00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 983 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3650 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.053 The recommended git tool is: git 00:00:00.054 using credential 00000000-0000-0000-0000-000000000002 00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.079 Fetching changes from the remote Git repository 00:00:00.084 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.109 Using shallow fetch with depth 1 00:00:00.109 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.109 > git --version # timeout=10 00:00:00.138 > git --version # 'git version 2.39.2' 00:00:00.138 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.158 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.158 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.145 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.156 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.168 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.168 > git config core.sparsecheckout # timeout=10 00:00:04.179 > git read-tree -mu HEAD # timeout=10 00:00:04.194 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.214 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.214 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.316 [Pipeline] Start of Pipeline 00:00:04.327 [Pipeline] library 00:00:04.329 Loading library shm_lib@master 00:00:04.329 Library shm_lib@master is cached. Copying from home. 00:00:04.342 [Pipeline] node 00:00:04.354 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.355 [Pipeline] { 00:00:04.363 [Pipeline] catchError 00:00:04.364 [Pipeline] { 00:00:04.373 [Pipeline] wrap 00:00:04.378 [Pipeline] { 00:00:04.384 [Pipeline] stage 00:00:04.385 [Pipeline] { (Prologue) 00:00:04.400 [Pipeline] echo 00:00:04.401 Node: VM-host-SM0 00:00:04.406 [Pipeline] cleanWs 00:00:04.417 [WS-CLEANUP] Deleting project workspace... 00:00:04.417 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.423 [WS-CLEANUP] done 00:00:04.598 [Pipeline] setCustomBuildProperty 00:00:04.673 [Pipeline] httpRequest 00:00:04.991 [Pipeline] echo 00:00:04.992 Sorcerer 10.211.164.20 is alive 00:00:05.001 [Pipeline] retry 00:00:05.003 [Pipeline] { 00:00:05.013 [Pipeline] httpRequest 00:00:05.017 HttpMethod: GET 00:00:05.017 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.018 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.019 Response Code: HTTP/1.1 200 OK 00:00:05.019 Success: Status code 200 is in the accepted range: 200,404 00:00:05.020 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.297 [Pipeline] } 00:00:05.306 [Pipeline] // retry 00:00:05.311 [Pipeline] sh 00:00:05.590 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.602 [Pipeline] httpRequest 00:00:06.184 [Pipeline] echo 00:00:06.186 Sorcerer 10.211.164.20 is alive 00:00:06.194 [Pipeline] retry 00:00:06.196 [Pipeline] { 00:00:06.209 [Pipeline] httpRequest 00:00:06.213 HttpMethod: GET 00:00:06.214 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:06.215 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:06.232 Response Code: HTTP/1.1 200 OK 00:00:06.232 Success: Status code 200 is in the accepted range: 200,404 00:00:06.233 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:55.221 [Pipeline] } 00:00:55.240 [Pipeline] // retry 00:00:55.249 [Pipeline] sh 00:00:55.535 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:58.085 [Pipeline] sh 00:00:58.364 + git -C spdk log --oneline -n5 00:00:58.364 c13c99a5e test: Various fixes for Fedora40 00:00:58.364 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:58.364 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:58.364 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:58.364 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:58.381 [Pipeline] withCredentials 00:00:58.392 > git --version # timeout=10 00:00:58.407 > git --version # 'git version 2.39.2' 00:00:58.424 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:58.426 [Pipeline] { 00:00:58.435 [Pipeline] retry 00:00:58.437 [Pipeline] { 00:00:58.452 [Pipeline] sh 00:00:58.733 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:58.743 [Pipeline] } 00:00:58.755 [Pipeline] // retry 00:00:58.759 [Pipeline] } 00:00:58.772 [Pipeline] // withCredentials 00:00:58.779 [Pipeline] httpRequest 00:00:59.192 [Pipeline] echo 00:00:59.194 Sorcerer 10.211.164.20 is alive 00:00:59.204 [Pipeline] retry 00:00:59.207 [Pipeline] { 00:00:59.221 [Pipeline] httpRequest 00:00:59.227 HttpMethod: GET 00:00:59.227 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:59.228 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:59.229 Response Code: HTTP/1.1 200 OK 00:00:59.230 Success: Status code 200 is in the accepted range: 200,404 00:00:59.230 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:06.818 [Pipeline] } 00:01:06.833 [Pipeline] // retry 
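
[Editor's note] The retry/httpRequest steps above fetch prepackaged source tarballs (jbp, spdk, dpdk) from the local "Sorcerer" package cache at 10.211.164.20 and unpack them into the workspace. A minimal stand-alone sketch of the same fetch-with-retry-and-extract flow is shown below; the URL, workspace path, and tar flags are taken from the log, while the script itself is illustrative and not part of the pipeline code.

#!/usr/bin/env bash
# Illustrative sketch of the [Pipeline] httpRequest / retry / sh steps above.
# Not part of the CI code; paths and URL copied from the log for context.
set -euo pipefail

pkg_url="http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz"
workspace="/var/jenkins/workspace/nvmf-tcp-vg-autotest"
tarball="$workspace/$(basename "$pkg_url")"

# Download with a few retries, mirroring the pipeline's retry {} wrapper.
curl -fsS --retry 3 --retry-delay 5 -o "$tarball" "$pkg_url"

# Extract without preserving ownership, exactly as the pipeline's sh step does.
tar --no-same-owner -xf "$tarball" -C "$workspace"
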
00:01:06.839 [Pipeline] sh 00:01:07.120 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:08.510 [Pipeline] sh 00:01:08.801 + git -C dpdk log --oneline -n5 00:01:08.801 caf0f5d395 version: 22.11.4 00:01:08.801 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:08.801 dc9c799c7d vhost: fix missing spinlock unlock 00:01:08.801 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:08.801 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:08.817 [Pipeline] writeFile 00:01:08.830 [Pipeline] sh 00:01:09.109 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:09.119 [Pipeline] sh 00:01:09.399 + cat autorun-spdk.conf 00:01:09.399 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.399 SPDK_TEST_NVMF=1 00:01:09.399 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.399 SPDK_TEST_USDT=1 00:01:09.399 SPDK_RUN_UBSAN=1 00:01:09.399 SPDK_TEST_NVMF_MDNS=1 00:01:09.399 NET_TYPE=virt 00:01:09.399 SPDK_JSONRPC_GO_CLIENT=1 00:01:09.399 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:09.399 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:09.399 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.406 RUN_NIGHTLY=1 00:01:09.408 [Pipeline] } 00:01:09.422 [Pipeline] // stage 00:01:09.437 [Pipeline] stage 00:01:09.439 [Pipeline] { (Run VM) 00:01:09.453 [Pipeline] sh 00:01:09.743 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:09.743 + echo 'Start stage prepare_nvme.sh' 00:01:09.743 Start stage prepare_nvme.sh 00:01:09.743 + [[ -n 1 ]] 00:01:09.743 + disk_prefix=ex1 00:01:09.743 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:09.743 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:09.743 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:09.743 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.743 ++ SPDK_TEST_NVMF=1 00:01:09.743 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.743 ++ SPDK_TEST_USDT=1 00:01:09.743 ++ SPDK_RUN_UBSAN=1 00:01:09.743 ++ SPDK_TEST_NVMF_MDNS=1 00:01:09.743 ++ NET_TYPE=virt 00:01:09.743 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:09.743 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:09.743 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:09.743 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.743 ++ RUN_NIGHTLY=1 00:01:09.743 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:09.743 + nvme_files=() 00:01:09.743 + declare -A nvme_files 00:01:09.743 + backend_dir=/var/lib/libvirt/images/backends 00:01:09.743 + nvme_files['nvme.img']=5G 00:01:09.743 + nvme_files['nvme-cmb.img']=5G 00:01:09.743 + nvme_files['nvme-multi0.img']=4G 00:01:09.743 + nvme_files['nvme-multi1.img']=4G 00:01:09.743 + nvme_files['nvme-multi2.img']=4G 00:01:09.743 + nvme_files['nvme-openstack.img']=8G 00:01:09.743 + nvme_files['nvme-zns.img']=5G 00:01:09.743 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:09.743 + (( SPDK_TEST_FTL == 1 )) 00:01:09.743 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:09.743 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:09.743 + for nvme in "${!nvme_files[@]}" 00:01:09.743 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:09.743 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:09.743 + for nvme in "${!nvme_files[@]}" 00:01:09.743 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:09.743 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.743 + for nvme in "${!nvme_files[@]}" 00:01:09.743 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:09.743 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:09.743 + for nvme in "${!nvme_files[@]}" 00:01:09.743 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:09.743 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.743 + for nvme in "${!nvme_files[@]}" 00:01:09.743 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:09.743 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:09.743 + for nvme in "${!nvme_files[@]}" 00:01:09.743 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:10.002 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.002 + for nvme in "${!nvme_files[@]}" 00:01:10.002 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:10.002 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.002 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:10.002 + echo 'End stage prepare_nvme.sh' 00:01:10.002 End stage prepare_nvme.sh 00:01:10.014 [Pipeline] sh 00:01:10.297 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:10.297 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:10.297 00:01:10.297 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:10.297 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:10.297 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:10.297 HELP=0 00:01:10.297 DRY_RUN=0 00:01:10.297 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:10.297 NVME_DISKS_TYPE=nvme,nvme, 00:01:10.297 NVME_AUTO_CREATE=0 00:01:10.297 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:10.297 NVME_CMB=,, 00:01:10.297 NVME_PMR=,, 00:01:10.297 NVME_ZNS=,, 00:01:10.297 NVME_MS=,, 00:01:10.297 NVME_FDP=,, 00:01:10.297 
SPDK_VAGRANT_DISTRO=fedora39 00:01:10.297 SPDK_VAGRANT_VMCPU=10 00:01:10.297 SPDK_VAGRANT_VMRAM=12288 00:01:10.297 SPDK_VAGRANT_PROVIDER=libvirt 00:01:10.297 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:10.297 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:10.297 SPDK_OPENSTACK_NETWORK=0 00:01:10.297 VAGRANT_PACKAGE_BOX=0 00:01:10.297 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:10.297 FORCE_DISTRO=true 00:01:10.297 VAGRANT_BOX_VERSION= 00:01:10.297 EXTRA_VAGRANTFILES= 00:01:10.297 NIC_MODEL=e1000 00:01:10.297 00:01:10.297 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:10.297 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:12.832 Bringing machine 'default' up with 'libvirt' provider... 00:01:13.769 ==> default: Creating image (snapshot of base box volume). 00:01:13.769 ==> default: Creating domain with the following settings... 00:01:13.769 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732141274_a7aab56d334f81ac90db 00:01:13.769 ==> default: -- Domain type: kvm 00:01:13.769 ==> default: -- Cpus: 10 00:01:13.769 ==> default: -- Feature: acpi 00:01:13.769 ==> default: -- Feature: apic 00:01:13.769 ==> default: -- Feature: pae 00:01:13.769 ==> default: -- Memory: 12288M 00:01:13.769 ==> default: -- Memory Backing: hugepages: 00:01:13.769 ==> default: -- Management MAC: 00:01:13.769 ==> default: -- Loader: 00:01:13.769 ==> default: -- Nvram: 00:01:13.769 ==> default: -- Base box: spdk/fedora39 00:01:13.769 ==> default: -- Storage pool: default 00:01:13.769 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732141274_a7aab56d334f81ac90db.img (20G) 00:01:13.769 ==> default: -- Volume Cache: default 00:01:13.769 ==> default: -- Kernel: 00:01:13.769 ==> default: -- Initrd: 00:01:13.769 ==> default: -- Graphics Type: vnc 00:01:13.769 ==> default: -- Graphics Port: -1 00:01:13.769 ==> default: -- Graphics IP: 127.0.0.1 00:01:13.769 ==> default: -- Graphics Password: Not defined 00:01:13.769 ==> default: -- Video Type: cirrus 00:01:13.769 ==> default: -- Video VRAM: 9216 00:01:13.769 ==> default: -- Sound Type: 00:01:13.769 ==> default: -- Keymap: en-us 00:01:13.769 ==> default: -- TPM Path: 00:01:13.769 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:13.769 ==> default: -- Command line args: 00:01:13.769 ==> default: -> value=-device, 00:01:13.769 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:13.769 ==> default: -> value=-drive, 00:01:13.769 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:13.769 ==> default: -> value=-device, 00:01:13.769 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.769 ==> default: -> value=-device, 00:01:13.769 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:13.769 ==> default: -> value=-drive, 00:01:13.769 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:13.769 ==> default: -> value=-device, 00:01:13.769 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.769 ==> default: -> value=-drive, 00:01:13.769 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:13.769 ==> default: -> value=-device, 00:01:13.769 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.769 ==> default: -> value=-drive, 00:01:13.769 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:13.769 ==> default: -> value=-device, 00:01:13.769 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.029 ==> default: Creating shared folders metadata... 00:01:14.029 ==> default: Starting domain. 00:01:15.408 ==> default: Waiting for domain to get an IP address... 00:01:33.496 ==> default: Waiting for SSH to become available... 00:01:33.496 ==> default: Configuring and enabling network interfaces... 00:01:36.786 default: SSH address: 192.168.121.65:22 00:01:36.786 default: SSH username: vagrant 00:01:36.786 default: SSH auth method: private key 00:01:38.692 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:46.815 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:52.084 ==> default: Mounting SSHFS shared folder... 00:01:53.988 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:53.988 ==> default: Checking Mount.. 00:01:55.365 ==> default: Folder Successfully Mounted! 00:01:55.365 ==> default: Running provisioner: file... 00:01:56.301 default: ~/.gitconfig => .gitconfig 00:01:56.559 00:01:56.559 SUCCESS! 00:01:56.559 00:01:56.560 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:56.560 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:56.560 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:56.560 00:01:56.569 [Pipeline] } 00:01:56.584 [Pipeline] // stage 00:01:56.594 [Pipeline] dir 00:01:56.594 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:01:56.596 [Pipeline] { 00:01:56.609 [Pipeline] catchError 00:01:56.611 [Pipeline] { 00:01:56.624 [Pipeline] sh 00:01:56.904 + vagrant ssh-config --host vagrant 00:01:56.904 + sed -ne /^Host/,$p 00:01:56.904 + tee ssh_conf 00:01:59.436 Host vagrant 00:01:59.436 HostName 192.168.121.65 00:01:59.436 User vagrant 00:01:59.436 Port 22 00:01:59.436 UserKnownHostsFile /dev/null 00:01:59.436 StrictHostKeyChecking no 00:01:59.436 PasswordAuthentication no 00:01:59.436 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:59.436 IdentitiesOnly yes 00:01:59.436 LogLevel FATAL 00:01:59.436 ForwardAgent yes 00:01:59.436 ForwardX11 yes 00:01:59.436 00:01:59.450 [Pipeline] withEnv 00:01:59.452 [Pipeline] { 00:01:59.466 [Pipeline] sh 00:01:59.754 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:59.754 source /etc/os-release 00:01:59.754 [[ -e /image.version ]] && img=$(< /image.version) 00:01:59.754 # Minimal, systemd-like check. 
00:01:59.754 if [[ -e /.dockerenv ]]; then 00:01:59.754 # Clear garbage from the node's name: 00:01:59.754 # agt-er_autotest_547-896 -> autotest_547-896 00:01:59.754 # $HOSTNAME is the actual container id 00:01:59.754 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:59.754 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:59.754 # We can assume this is a mount from a host where container is running, 00:01:59.754 # so fetch its hostname to easily identify the target swarm worker. 00:01:59.754 container="$(< /etc/hostname) ($agent)" 00:01:59.754 else 00:01:59.754 # Fallback 00:01:59.754 container=$agent 00:01:59.754 fi 00:01:59.754 fi 00:01:59.754 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:59.754 00:02:00.026 [Pipeline] } 00:02:00.043 [Pipeline] // withEnv 00:02:00.053 [Pipeline] setCustomBuildProperty 00:02:00.070 [Pipeline] stage 00:02:00.073 [Pipeline] { (Tests) 00:02:00.095 [Pipeline] sh 00:02:00.379 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:00.653 [Pipeline] sh 00:02:00.938 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:01.215 [Pipeline] timeout 00:02:01.216 Timeout set to expire in 1 hr 0 min 00:02:01.218 [Pipeline] { 00:02:01.234 [Pipeline] sh 00:02:01.514 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:02.093 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:02.123 [Pipeline] sh 00:02:02.438 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:02.711 [Pipeline] sh 00:02:02.995 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:03.273 [Pipeline] sh 00:02:03.558 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:03.818 ++ readlink -f spdk_repo 00:02:03.818 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:03.818 + [[ -n /home/vagrant/spdk_repo ]] 00:02:03.818 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:03.818 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:03.818 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:03.818 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:03.818 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:03.818 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:03.818 + cd /home/vagrant/spdk_repo 00:02:03.818 + source /etc/os-release 00:02:03.818 ++ NAME='Fedora Linux' 00:02:03.818 ++ VERSION='39 (Cloud Edition)' 00:02:03.818 ++ ID=fedora 00:02:03.818 ++ VERSION_ID=39 00:02:03.818 ++ VERSION_CODENAME= 00:02:03.818 ++ PLATFORM_ID=platform:f39 00:02:03.818 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:03.818 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:03.818 ++ LOGO=fedora-logo-icon 00:02:03.818 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:03.818 ++ HOME_URL=https://fedoraproject.org/ 00:02:03.818 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:03.818 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:03.818 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:03.818 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:03.818 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:03.818 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:03.818 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:03.818 ++ SUPPORT_END=2024-11-12 00:02:03.818 ++ VARIANT='Cloud Edition' 00:02:03.818 ++ VARIANT_ID=cloud 00:02:03.818 + uname -a 00:02:03.818 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:03.818 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:03.818 Hugepages 00:02:03.818 node hugesize free / total 00:02:03.818 node0 1048576kB 0 / 0 00:02:03.818 node0 2048kB 0 / 0 00:02:03.818 00:02:03.818 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:03.818 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:03.818 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:04.078 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:04.078 + rm -f /tmp/spdk-ld-path 00:02:04.078 + source autorun-spdk.conf 00:02:04.078 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.078 ++ SPDK_TEST_NVMF=1 00:02:04.078 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:04.078 ++ SPDK_TEST_USDT=1 00:02:04.078 ++ SPDK_RUN_UBSAN=1 00:02:04.078 ++ SPDK_TEST_NVMF_MDNS=1 00:02:04.078 ++ NET_TYPE=virt 00:02:04.078 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:04.078 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:04.078 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:04.078 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.078 ++ RUN_NIGHTLY=1 00:02:04.078 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:04.078 + [[ -n '' ]] 00:02:04.078 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:04.078 + for M in /var/spdk/build-*-manifest.txt 00:02:04.078 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:04.078 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.078 + for M in /var/spdk/build-*-manifest.txt 00:02:04.078 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:04.078 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.078 + for M in /var/spdk/build-*-manifest.txt 00:02:04.078 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:04.078 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.078 ++ uname 00:02:04.078 + [[ Linux == \L\i\n\u\x ]] 00:02:04.078 + sudo dmesg -T 00:02:04.078 + sudo dmesg --clear 00:02:04.078 + dmesg_pid=5964 00:02:04.078 + [[ Fedora Linux == FreeBSD ]] 00:02:04.078 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.078 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.078 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:04.078 + sudo dmesg -Tw 00:02:04.078 + [[ -x /usr/src/fio-static/fio ]] 00:02:04.078 + export FIO_BIN=/usr/src/fio-static/fio 00:02:04.078 + FIO_BIN=/usr/src/fio-static/fio 00:02:04.078 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:04.078 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:04.078 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:04.078 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.078 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.078 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:04.078 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.078 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.078 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.078 Test configuration: 00:02:04.078 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.078 SPDK_TEST_NVMF=1 00:02:04.078 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:04.078 SPDK_TEST_USDT=1 00:02:04.078 SPDK_RUN_UBSAN=1 00:02:04.078 SPDK_TEST_NVMF_MDNS=1 00:02:04.078 NET_TYPE=virt 00:02:04.078 SPDK_JSONRPC_GO_CLIENT=1 00:02:04.078 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:04.078 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:04.078 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.338 RUN_NIGHTLY=1 22:22:04 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:04.338 22:22:04 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:04.338 22:22:04 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:04.338 22:22:04 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:04.338 22:22:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.338 22:22:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.338 22:22:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.338 22:22:04 -- paths/export.sh@5 -- $ export PATH 00:02:04.338 22:22:04 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.338 22:22:04 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:04.338 22:22:04 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:04.338 22:22:04 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732141324.XXXXXX 00:02:04.338 22:22:04 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732141324.Xuov5r 00:02:04.338 22:22:04 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:04.338 22:22:04 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:04.338 22:22:04 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:04.338 22:22:04 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:04.338 22:22:04 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:04.338 22:22:04 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:04.338 22:22:04 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:04.338 22:22:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.338 22:22:04 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:04.338 22:22:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:04.338 22:22:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:04.338 22:22:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:04.338 22:22:04 -- spdk/autobuild.sh@16 -- $ date -u 00:02:04.338 Wed Nov 20 10:22:04 PM UTC 2024 00:02:04.338 22:22:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:04.338 LTS-67-gc13c99a5e 00:02:04.338 22:22:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:04.338 22:22:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:04.338 22:22:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:04.338 22:22:04 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:04.338 22:22:04 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:04.338 22:22:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.338 ************************************ 00:02:04.338 START TEST ubsan 00:02:04.338 ************************************ 00:02:04.338 22:22:04 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:04.338 using ubsan 00:02:04.338 00:02:04.338 real 0m0.001s 00:02:04.338 user 0m0.000s 00:02:04.338 sys 0m0.000s 00:02:04.338 22:22:04 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:04.338 ************************************ 00:02:04.338 END TEST ubsan 00:02:04.338 ************************************ 00:02:04.338 22:22:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.338 
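
[Editor's note] The "run_test ubsan" block above is produced by a helper in SPDK's test/common/autotest_common.sh that brackets each test with START/END banners and times it. The following is a hypothetical, simplified version of such a wrapper; the real helper also handles xtrace toggling and result bookkeeping, so this is only an illustration of the pattern visible in the log.

# Hypothetical, simplified run_test-style wrapper: print banners around a
# command and time it. Illustrative only; not the actual SPDK helper.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Example mirroring the log above:
#   run_test_sketch ubsan echo 'using ubsan'
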
22:22:04 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:04.338 22:22:04 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:04.338 22:22:04 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:04.338 22:22:04 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:04.338 22:22:04 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:04.338 22:22:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.338 ************************************ 00:02:04.338 START TEST build_native_dpdk 00:02:04.338 ************************************ 00:02:04.338 22:22:04 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:04.338 22:22:04 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:04.338 22:22:04 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:04.338 22:22:04 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:04.338 22:22:04 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:04.338 22:22:04 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:04.338 22:22:04 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:04.338 22:22:04 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:04.338 22:22:04 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:04.338 22:22:04 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:04.338 22:22:04 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:04.338 22:22:04 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:04.338 22:22:04 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:04.338 22:22:04 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:04.338 22:22:04 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:04.338 22:22:04 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:04.338 22:22:04 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:04.338 22:22:04 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:04.338 caf0f5d395 version: 22.11.4 00:02:04.338 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:04.338 dc9c799c7d vhost: fix missing spinlock unlock 00:02:04.338 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:04.338 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:04.338 22:22:04 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:04.338 22:22:04 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:04.338 22:22:04 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:04.338 22:22:04 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:04.338 22:22:04 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:04.338 22:22:04 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:04.338 22:22:04 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:04.338 22:22:04 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:04.338 22:22:04 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:04.338 22:22:04 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:04.338 22:22:04 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:04.338 22:22:04 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:04.338 22:22:04 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:04.339 22:22:04 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:04.339 22:22:04 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:04.339 22:22:04 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:04.339 22:22:04 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:04.339 22:22:04 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:04.339 22:22:04 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:04.339 22:22:04 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:04.339 22:22:04 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:04.339 22:22:04 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:04.339 22:22:04 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:04.339 22:22:04 -- scripts/common.sh@343 -- $ case "$op" in 00:02:04.339 22:22:04 -- scripts/common.sh@344 -- $ : 1 00:02:04.339 22:22:04 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:04.339 22:22:04 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:04.339 22:22:04 -- scripts/common.sh@364 -- $ decimal 22 00:02:04.339 22:22:04 -- scripts/common.sh@352 -- $ local d=22 00:02:04.339 22:22:04 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:04.339 22:22:04 -- scripts/common.sh@354 -- $ echo 22 00:02:04.339 22:22:04 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:04.339 22:22:04 -- scripts/common.sh@365 -- $ decimal 21 00:02:04.339 22:22:04 -- scripts/common.sh@352 -- $ local d=21 00:02:04.339 22:22:04 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:04.339 22:22:04 -- scripts/common.sh@354 -- $ echo 21 00:02:04.339 22:22:04 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:04.339 22:22:04 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:04.339 22:22:04 -- scripts/common.sh@366 -- $ return 1 00:02:04.339 22:22:04 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:04.339 patching file config/rte_config.h 00:02:04.339 Hunk #1 succeeded at 60 (offset 1 line). 00:02:04.339 22:22:04 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:04.339 22:22:04 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:04.339 22:22:04 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:04.339 22:22:04 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:04.339 22:22:04 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:04.339 22:22:04 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:04.339 22:22:04 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:04.339 22:22:04 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:04.339 22:22:04 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:04.339 22:22:05 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:04.339 22:22:05 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:04.339 22:22:05 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:04.339 22:22:05 -- scripts/common.sh@343 -- $ case "$op" in 00:02:04.339 22:22:05 -- scripts/common.sh@344 -- $ : 1 00:02:04.339 22:22:05 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:04.339 22:22:05 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:04.339 22:22:05 -- scripts/common.sh@364 -- $ decimal 22 00:02:04.339 22:22:05 -- scripts/common.sh@352 -- $ local d=22 00:02:04.339 22:22:05 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:04.339 22:22:05 -- scripts/common.sh@354 -- $ echo 22 00:02:04.339 22:22:05 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:04.339 22:22:05 -- scripts/common.sh@365 -- $ decimal 24 00:02:04.339 22:22:05 -- scripts/common.sh@352 -- $ local d=24 00:02:04.339 22:22:05 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:04.339 22:22:05 -- scripts/common.sh@354 -- $ echo 24 00:02:04.339 22:22:05 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:04.339 22:22:05 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:04.339 22:22:05 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:04.339 22:22:05 -- scripts/common.sh@367 -- $ return 0 00:02:04.339 22:22:05 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:04.339 patching file lib/pcapng/rte_pcapng.c 00:02:04.339 Hunk #1 succeeded at 110 (offset -18 lines). 
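
[Editor's note] The xtrace above shows scripts/common.sh deciding which DPDK patches to apply by comparing version 22.11.4 against thresholds with cmp_versions/lt: each version string is split on ".-:" and compared component by component. Below is a hedged, self-contained sketch of that comparison; it is simplified (numeric components only, "<" operator only) and is not the actual SPDK implementation.

# Simplified sketch of the version comparison traced above. Splits on '.',
# compares numeric components, returns 0 if $1 < $2. Illustrative only.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal, so not less-than
}

# Mirrors the decisions in the log:
#   version_lt 22.11.4 21.11.0  -> false (22.11.4 is not older than 21.11.0)
#   version_lt 22.11.4 24.07.0  -> true  (so the rte_pcapng patch is applied)
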
00:02:04.339 22:22:05 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:04.339 22:22:05 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:04.339 22:22:05 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:04.339 22:22:05 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:04.339 22:22:05 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:09.617 The Meson build system 00:02:09.617 Version: 1.5.0 00:02:09.617 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:09.617 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:09.617 Build type: native build 00:02:09.617 Program cat found: YES (/usr/bin/cat) 00:02:09.617 Project name: DPDK 00:02:09.617 Project version: 22.11.4 00:02:09.617 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:09.618 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:09.618 Host machine cpu family: x86_64 00:02:09.618 Host machine cpu: x86_64 00:02:09.618 Message: ## Building in Developer Mode ## 00:02:09.618 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.618 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:09.618 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.618 Program objdump found: YES (/usr/bin/objdump) 00:02:09.618 Program python3 found: YES (/usr/bin/python3) 00:02:09.618 Program cat found: YES (/usr/bin/cat) 00:02:09.618 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:09.618 Checking for size of "void *" : 8 00:02:09.618 Checking for size of "void *" : 8 (cached) 00:02:09.618 Library m found: YES 00:02:09.618 Library numa found: YES 00:02:09.618 Has header "numaif.h" : YES 00:02:09.618 Library fdt found: NO 00:02:09.618 Library execinfo found: NO 00:02:09.618 Has header "execinfo.h" : YES 00:02:09.618 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:09.618 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.618 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.618 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.618 Run-time dependency openssl found: YES 3.1.1 00:02:09.618 Run-time dependency libpcap found: YES 1.10.4 00:02:09.618 Has header "pcap.h" with dependency libpcap: YES 00:02:09.618 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.618 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.618 Compiler for C supports arguments -Wformat: YES 00:02:09.618 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.618 Compiler for C supports arguments -Wformat-security: NO 00:02:09.618 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.618 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.618 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.618 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.618 Compiler for C supports arguments -Wpointer-arith: YES 00:02:09.618 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.618 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.618 Compiler for C supports arguments -Wundef: YES 00:02:09.618 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.618 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:09.618 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.618 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.618 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.618 Compiler for C supports arguments -mavx512f: YES 00:02:09.618 Checking if "AVX512 checking" compiles: YES 00:02:09.618 Fetching value of define "__SSE4_2__" : 1 00:02:09.618 Fetching value of define "__AES__" : 1 00:02:09.618 Fetching value of define "__AVX__" : 1 00:02:09.618 Fetching value of define "__AVX2__" : 1 00:02:09.618 Fetching value of define "__AVX512BW__" : (undefined) 00:02:09.618 Fetching value of define "__AVX512CD__" : (undefined) 00:02:09.618 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:09.618 Fetching value of define "__AVX512F__" : (undefined) 00:02:09.618 Fetching value of define "__AVX512VL__" : (undefined) 00:02:09.618 Fetching value of define "__PCLMUL__" : 1 00:02:09.618 Fetching value of define "__RDRND__" : 1 00:02:09.618 Fetching value of define "__RDSEED__" : 1 00:02:09.618 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.618 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.618 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.618 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.618 Checking for function "getentropy" : YES 00:02:09.618 Message: lib/eal: Defining dependency "eal" 00:02:09.618 Message: lib/ring: Defining dependency "ring" 00:02:09.618 Message: lib/rcu: Defining dependency "rcu" 00:02:09.618 Message: lib/mempool: Defining dependency "mempool" 00:02:09.618 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.618 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:09.618 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.618 Compiler for C supports arguments -mpclmul: YES 00:02:09.618 Compiler for C supports arguments -maes: YES 00:02:09.618 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.618 Compiler for C supports arguments -mavx512bw: YES 00:02:09.618 Compiler for C supports arguments -mavx512dq: YES 00:02:09.618 Compiler for C supports arguments -mavx512vl: YES 00:02:09.618 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.618 Compiler for C supports arguments -mavx2: YES 00:02:09.618 Compiler for C supports arguments -mavx: YES 00:02:09.618 Message: lib/net: Defining dependency "net" 00:02:09.618 Message: lib/meter: Defining dependency "meter" 00:02:09.618 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.618 Message: lib/pci: Defining dependency "pci" 00:02:09.618 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.618 Message: lib/metrics: Defining dependency "metrics" 00:02:09.618 Message: lib/hash: Defining dependency "hash" 00:02:09.618 Message: lib/timer: Defining dependency "timer" 00:02:09.618 Fetching value of define "__AVX2__" : 1 (cached) 00:02:09.618 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.618 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:09.618 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:09.618 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:09.618 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:09.618 Message: lib/acl: Defining dependency "acl" 00:02:09.618 Message: lib/bbdev: Defining dependency "bbdev" 00:02:09.618 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:09.618 Run-time dependency libelf found: YES 0.191 00:02:09.618 Message: lib/bpf: Defining dependency "bpf" 00:02:09.618 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:09.618 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.618 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.618 Message: lib/distributor: Defining dependency "distributor" 00:02:09.618 Message: lib/efd: Defining dependency "efd" 00:02:09.618 Message: lib/eventdev: Defining dependency "eventdev" 00:02:09.618 Message: lib/gpudev: Defining dependency "gpudev" 00:02:09.618 Message: lib/gro: Defining dependency "gro" 00:02:09.618 Message: lib/gso: Defining dependency "gso" 00:02:09.618 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:09.618 Message: lib/jobstats: Defining dependency "jobstats" 00:02:09.618 Message: lib/latencystats: Defining dependency "latencystats" 00:02:09.618 Message: lib/lpm: Defining dependency "lpm" 00:02:09.618 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.618 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:09.618 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:09.618 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:09.618 Message: lib/member: Defining dependency "member" 00:02:09.618 Message: lib/pcapng: Defining dependency "pcapng" 00:02:09.618 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.618 Message: lib/power: Defining dependency "power" 00:02:09.618 Message: lib/rawdev: Defining dependency "rawdev" 00:02:09.618 Message: lib/regexdev: Defining dependency "regexdev" 00:02:09.618 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.618 Message: lib/rib: Defining 
dependency "rib" 00:02:09.618 Message: lib/reorder: Defining dependency "reorder" 00:02:09.618 Message: lib/sched: Defining dependency "sched" 00:02:09.618 Message: lib/security: Defining dependency "security" 00:02:09.618 Message: lib/stack: Defining dependency "stack" 00:02:09.618 Has header "linux/userfaultfd.h" : YES 00:02:09.618 Message: lib/vhost: Defining dependency "vhost" 00:02:09.618 Message: lib/ipsec: Defining dependency "ipsec" 00:02:09.618 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.618 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:09.618 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:09.618 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:09.618 Message: lib/fib: Defining dependency "fib" 00:02:09.618 Message: lib/port: Defining dependency "port" 00:02:09.618 Message: lib/pdump: Defining dependency "pdump" 00:02:09.618 Message: lib/table: Defining dependency "table" 00:02:09.618 Message: lib/pipeline: Defining dependency "pipeline" 00:02:09.618 Message: lib/graph: Defining dependency "graph" 00:02:09.618 Message: lib/node: Defining dependency "node" 00:02:09.618 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.618 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.618 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.618 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.618 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:09.618 Compiler for C supports arguments -Wno-unused-value: YES 00:02:09.618 Compiler for C supports arguments -Wno-format: YES 00:02:09.618 Compiler for C supports arguments -Wno-format-security: YES 00:02:09.618 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:11.000 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:11.000 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:11.000 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:11.000 Fetching value of define "__AVX2__" : 1 (cached) 00:02:11.000 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:11.000 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.000 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:11.000 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:11.000 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:11.000 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:11.000 Configuring doxy-api.conf using configuration 00:02:11.000 Program sphinx-build found: NO 00:02:11.000 Configuring rte_build_config.h using configuration 00:02:11.000 Message: 00:02:11.000 ================= 00:02:11.000 Applications Enabled 00:02:11.000 ================= 00:02:11.000 00:02:11.000 apps: 00:02:11.000 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:11.000 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:11.000 test-security-perf, 00:02:11.000 00:02:11.000 Message: 00:02:11.000 ================= 00:02:11.000 Libraries Enabled 00:02:11.000 ================= 00:02:11.000 00:02:11.000 libs: 00:02:11.000 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:11.000 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:11.000 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:11.000 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:11.000 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:11.000 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:11.000 table, pipeline, graph, node, 00:02:11.000 00:02:11.000 Message: 00:02:11.000 =============== 00:02:11.000 Drivers Enabled 00:02:11.000 =============== 00:02:11.000 00:02:11.000 common: 00:02:11.000 00:02:11.000 bus: 00:02:11.000 pci, vdev, 00:02:11.000 mempool: 00:02:11.000 ring, 00:02:11.000 dma: 00:02:11.001 00:02:11.001 net: 00:02:11.001 i40e, 00:02:11.001 raw: 00:02:11.001 00:02:11.001 crypto: 00:02:11.001 00:02:11.001 compress: 00:02:11.001 00:02:11.001 regex: 00:02:11.001 00:02:11.001 vdpa: 00:02:11.001 00:02:11.001 event: 00:02:11.001 00:02:11.001 baseband: 00:02:11.001 00:02:11.001 gpu: 00:02:11.001 00:02:11.001 00:02:11.001 Message: 00:02:11.001 ================= 00:02:11.001 Content Skipped 00:02:11.001 ================= 00:02:11.001 00:02:11.001 apps: 00:02:11.001 00:02:11.001 libs: 00:02:11.001 kni: explicitly disabled via build config (deprecated lib) 00:02:11.001 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:11.001 00:02:11.001 drivers: 00:02:11.001 common/cpt: not in enabled drivers build config 00:02:11.001 common/dpaax: not in enabled drivers build config 00:02:11.001 common/iavf: not in enabled drivers build config 00:02:11.001 common/idpf: not in enabled drivers build config 00:02:11.001 common/mvep: not in enabled drivers build config 00:02:11.001 common/octeontx: not in enabled drivers build config 00:02:11.001 bus/auxiliary: not in enabled drivers build config 00:02:11.001 bus/dpaa: not in enabled drivers build config 00:02:11.001 bus/fslmc: not in enabled drivers build config 00:02:11.001 bus/ifpga: not in enabled drivers build config 00:02:11.001 bus/vmbus: not in enabled drivers build config 00:02:11.001 common/cnxk: not in enabled drivers build config 00:02:11.001 common/mlx5: not in enabled drivers build config 00:02:11.001 common/qat: not in enabled drivers build config 00:02:11.001 common/sfc_efx: not in enabled drivers build config 00:02:11.001 mempool/bucket: not in enabled drivers build config 00:02:11.001 mempool/cnxk: not in enabled drivers build config 00:02:11.001 mempool/dpaa: not in enabled drivers build config 00:02:11.001 mempool/dpaa2: not in enabled drivers build config 00:02:11.001 mempool/octeontx: not in enabled drivers build config 00:02:11.001 mempool/stack: not in enabled drivers build config 00:02:11.001 dma/cnxk: not in enabled drivers build config 00:02:11.001 dma/dpaa: not in enabled drivers build config 00:02:11.001 dma/dpaa2: not in enabled drivers build config 00:02:11.001 dma/hisilicon: not in enabled drivers build config 00:02:11.001 dma/idxd: not in enabled drivers build config 00:02:11.001 dma/ioat: not in enabled drivers build config 00:02:11.001 dma/skeleton: not in enabled drivers build config 00:02:11.001 net/af_packet: not in enabled drivers build config 00:02:11.001 net/af_xdp: not in enabled drivers build config 00:02:11.001 net/ark: not in enabled drivers build config 00:02:11.001 net/atlantic: not in enabled drivers build config 00:02:11.001 net/avp: not in enabled drivers build config 00:02:11.001 net/axgbe: not in enabled drivers build config 00:02:11.001 net/bnx2x: not in enabled drivers build config 00:02:11.001 net/bnxt: not in enabled drivers build config 00:02:11.001 net/bonding: not in enabled drivers build config 00:02:11.001 net/cnxk: not in enabled drivers build config 00:02:11.001 net/cxgbe: not in 
enabled drivers build config 00:02:11.001 net/dpaa: not in enabled drivers build config 00:02:11.001 net/dpaa2: not in enabled drivers build config 00:02:11.001 net/e1000: not in enabled drivers build config 00:02:11.001 net/ena: not in enabled drivers build config 00:02:11.001 net/enetc: not in enabled drivers build config 00:02:11.001 net/enetfec: not in enabled drivers build config 00:02:11.001 net/enic: not in enabled drivers build config 00:02:11.001 net/failsafe: not in enabled drivers build config 00:02:11.001 net/fm10k: not in enabled drivers build config 00:02:11.001 net/gve: not in enabled drivers build config 00:02:11.001 net/hinic: not in enabled drivers build config 00:02:11.001 net/hns3: not in enabled drivers build config 00:02:11.001 net/iavf: not in enabled drivers build config 00:02:11.001 net/ice: not in enabled drivers build config 00:02:11.001 net/idpf: not in enabled drivers build config 00:02:11.001 net/igc: not in enabled drivers build config 00:02:11.001 net/ionic: not in enabled drivers build config 00:02:11.001 net/ipn3ke: not in enabled drivers build config 00:02:11.001 net/ixgbe: not in enabled drivers build config 00:02:11.001 net/kni: not in enabled drivers build config 00:02:11.001 net/liquidio: not in enabled drivers build config 00:02:11.001 net/mana: not in enabled drivers build config 00:02:11.001 net/memif: not in enabled drivers build config 00:02:11.001 net/mlx4: not in enabled drivers build config 00:02:11.001 net/mlx5: not in enabled drivers build config 00:02:11.001 net/mvneta: not in enabled drivers build config 00:02:11.001 net/mvpp2: not in enabled drivers build config 00:02:11.001 net/netvsc: not in enabled drivers build config 00:02:11.001 net/nfb: not in enabled drivers build config 00:02:11.001 net/nfp: not in enabled drivers build config 00:02:11.001 net/ngbe: not in enabled drivers build config 00:02:11.001 net/null: not in enabled drivers build config 00:02:11.001 net/octeontx: not in enabled drivers build config 00:02:11.001 net/octeon_ep: not in enabled drivers build config 00:02:11.001 net/pcap: not in enabled drivers build config 00:02:11.001 net/pfe: not in enabled drivers build config 00:02:11.001 net/qede: not in enabled drivers build config 00:02:11.001 net/ring: not in enabled drivers build config 00:02:11.001 net/sfc: not in enabled drivers build config 00:02:11.001 net/softnic: not in enabled drivers build config 00:02:11.001 net/tap: not in enabled drivers build config 00:02:11.001 net/thunderx: not in enabled drivers build config 00:02:11.001 net/txgbe: not in enabled drivers build config 00:02:11.001 net/vdev_netvsc: not in enabled drivers build config 00:02:11.001 net/vhost: not in enabled drivers build config 00:02:11.001 net/virtio: not in enabled drivers build config 00:02:11.001 net/vmxnet3: not in enabled drivers build config 00:02:11.001 raw/cnxk_bphy: not in enabled drivers build config 00:02:11.001 raw/cnxk_gpio: not in enabled drivers build config 00:02:11.001 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:11.001 raw/ifpga: not in enabled drivers build config 00:02:11.001 raw/ntb: not in enabled drivers build config 00:02:11.001 raw/skeleton: not in enabled drivers build config 00:02:11.001 crypto/armv8: not in enabled drivers build config 00:02:11.001 crypto/bcmfs: not in enabled drivers build config 00:02:11.001 crypto/caam_jr: not in enabled drivers build config 00:02:11.001 crypto/ccp: not in enabled drivers build config 00:02:11.001 crypto/cnxk: not in enabled drivers build config 00:02:11.001 
crypto/dpaa_sec: not in enabled drivers build config 00:02:11.001 crypto/dpaa2_sec: not in enabled drivers build config 00:02:11.001 crypto/ipsec_mb: not in enabled drivers build config 00:02:11.001 crypto/mlx5: not in enabled drivers build config 00:02:11.001 crypto/mvsam: not in enabled drivers build config 00:02:11.001 crypto/nitrox: not in enabled drivers build config 00:02:11.001 crypto/null: not in enabled drivers build config 00:02:11.001 crypto/octeontx: not in enabled drivers build config 00:02:11.001 crypto/openssl: not in enabled drivers build config 00:02:11.001 crypto/scheduler: not in enabled drivers build config 00:02:11.001 crypto/uadk: not in enabled drivers build config 00:02:11.001 crypto/virtio: not in enabled drivers build config 00:02:11.001 compress/isal: not in enabled drivers build config 00:02:11.001 compress/mlx5: not in enabled drivers build config 00:02:11.001 compress/octeontx: not in enabled drivers build config 00:02:11.001 compress/zlib: not in enabled drivers build config 00:02:11.001 regex/mlx5: not in enabled drivers build config 00:02:11.001 regex/cn9k: not in enabled drivers build config 00:02:11.001 vdpa/ifc: not in enabled drivers build config 00:02:11.001 vdpa/mlx5: not in enabled drivers build config 00:02:11.001 vdpa/sfc: not in enabled drivers build config 00:02:11.001 event/cnxk: not in enabled drivers build config 00:02:11.001 event/dlb2: not in enabled drivers build config 00:02:11.001 event/dpaa: not in enabled drivers build config 00:02:11.001 event/dpaa2: not in enabled drivers build config 00:02:11.001 event/dsw: not in enabled drivers build config 00:02:11.001 event/opdl: not in enabled drivers build config 00:02:11.001 event/skeleton: not in enabled drivers build config 00:02:11.001 event/sw: not in enabled drivers build config 00:02:11.001 event/octeontx: not in enabled drivers build config 00:02:11.001 baseband/acc: not in enabled drivers build config 00:02:11.001 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:11.001 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:11.001 baseband/la12xx: not in enabled drivers build config 00:02:11.001 baseband/null: not in enabled drivers build config 00:02:11.001 baseband/turbo_sw: not in enabled drivers build config 00:02:11.001 gpu/cuda: not in enabled drivers build config 00:02:11.001 00:02:11.001 00:02:11.001 Build targets in project: 314 00:02:11.001 00:02:11.001 DPDK 22.11.4 00:02:11.001 00:02:11.001 User defined options 00:02:11.001 libdir : lib 00:02:11.001 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:11.001 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:11.001 c_link_args : 00:02:11.001 enable_docs : false 00:02:11.001 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:11.001 enable_kmods : false 00:02:11.001 machine : native 00:02:11.001 tests : false 00:02:11.001 00:02:11.001 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.001 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
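[editor's note] The "User defined options" summary above can be restated as an explicit configure invocation. The sketch below is an assumption reconstructed from that summary, using the non-deprecated `meson setup` form that the preceding WARNING recommends; the paths and values are taken from the log, but the exact DPDK project option names (e.g. machine) can vary between releases, so treat it as illustrative rather than the command the job actually ran.

    # Hedged sketch: mapping the logged "User defined options" onto a meson setup call.
    # Values come from the summary above; option spellings are assumptions.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # Build with the same parallelism the log shows next:
    ninja -C build-tmp -j10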
00:02:11.261 22:22:11 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:11.261 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:11.261 [1/743] Generating lib/rte_telemetry_def with a custom command 00:02:11.261 [2/743] Generating lib/rte_kvargs_def with a custom command 00:02:11.261 [3/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:11.261 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:11.261 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:11.261 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:11.261 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:11.261 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:11.519 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:11.519 [10/743] Linking static target lib/librte_kvargs.a 00:02:11.519 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:11.519 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:11.519 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:11.519 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:11.519 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:11.519 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.519 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:11.519 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.519 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:11.519 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.778 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:11.778 [22/743] Linking target lib/librte_kvargs.so.23.0 00:02:11.778 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:11.778 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:11.778 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:11.778 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:11.778 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.778 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.778 [29/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.778 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:11.778 [31/743] Linking static target lib/librte_telemetry.a 00:02:11.778 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:11.778 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.778 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:12.037 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.037 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.037 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.037 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:12.037 [39/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:12.037 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.037 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.037 [42/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.037 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:12.296 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:12.296 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:12.296 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:12.296 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.296 [48/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:12.296 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.296 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.296 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.296 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.296 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.296 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.296 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.296 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:12.555 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:12.555 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:12.555 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.555 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:12.555 [61/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.555 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.555 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.555 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.555 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:12.555 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.555 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.555 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.555 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.555 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.813 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.813 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.813 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.813 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.813 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.813 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.813 [77/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.813 [78/743] Generating lib/rte_eal_def with 
a custom command 00:02:12.813 [79/743] Generating lib/rte_eal_mingw with a custom command 00:02:12.813 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.813 [81/743] Generating lib/rte_ring_def with a custom command 00:02:12.813 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:12.813 [83/743] Generating lib/rte_rcu_def with a custom command 00:02:12.813 [84/743] Generating lib/rte_rcu_mingw with a custom command 00:02:12.813 [85/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.813 [86/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.813 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.813 [88/743] Linking static target lib/librte_ring.a 00:02:13.072 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.072 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:13.072 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:13.072 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.072 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:13.072 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.331 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.331 [96/743] Linking static target lib/librte_eal.a 00:02:13.331 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.331 [98/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.331 [99/743] Generating lib/rte_mbuf_def with a custom command 00:02:13.331 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:13.331 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.591 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.591 [103/743] Linking static target lib/librte_rcu.a 00:02:13.591 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.591 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.850 [106/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.850 [107/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.850 [108/743] Linking static target lib/librte_mempool.a 00:02:13.850 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.850 [110/743] Generating lib/rte_net_def with a custom command 00:02:13.850 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:13.850 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.850 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:14.110 [114/743] Generating lib/rte_meter_def with a custom command 00:02:14.110 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:14.110 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.110 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.110 [118/743] Linking static target lib/librte_meter.a 00:02:14.110 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.369 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.369 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.369 [122/743] 
Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.369 [123/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.369 [124/743] Linking static target lib/librte_net.a 00:02:14.369 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.369 [126/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.369 [127/743] Linking static target lib/librte_mbuf.a 00:02:14.628 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.628 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.628 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.888 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.888 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.888 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.147 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.147 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.406 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.406 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:15.406 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:15.406 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.406 [140/743] Generating lib/rte_pci_def with a custom command 00:02:15.406 [141/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.406 [142/743] Generating lib/rte_pci_mingw with a custom command 00:02:15.406 [143/743] Linking static target lib/librte_pci.a 00:02:15.406 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.666 [145/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.666 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.666 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.666 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.666 [149/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.666 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.666 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.666 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.666 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.925 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.925 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.925 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.925 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:15.925 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:15.925 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:15.925 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:15.925 [161/743] Generating lib/rte_metrics_mingw with a custom command 00:02:15.925 [162/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:15.925 [163/743] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.925 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.925 [165/743] Generating lib/rte_hash_def with a custom command 00:02:16.184 [166/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.184 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:16.184 [168/743] Generating lib/rte_timer_def with a custom command 00:02:16.184 [169/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.184 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:16.184 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.184 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.184 [173/743] Linking static target lib/librte_cmdline.a 00:02:16.444 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:16.444 [175/743] Linking static target lib/librte_metrics.a 00:02:16.444 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.444 [177/743] Linking static target lib/librte_timer.a 00:02:16.703 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.962 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.962 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.962 [181/743] Linking static target lib/librte_ethdev.a 00:02:16.962 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.962 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:17.221 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.480 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:17.480 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:17.480 [187/743] Generating lib/rte_acl_def with a custom command 00:02:17.480 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:17.480 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:17.480 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:17.480 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:17.480 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:17.739 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:17.739 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:18.308 [195/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:18.308 [196/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:18.308 [197/743] Linking static target lib/librte_bitratestats.a 00:02:18.308 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.308 [199/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:18.308 [200/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:18.308 [201/743] Linking static target lib/librte_bbdev.a 00:02:18.567 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.567 [203/743] Linking static target lib/librte_hash.a 00:02:18.825 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:18.825 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:18.825 [206/743] Linking static target 
lib/acl/libavx512_tmp.a 00:02:18.825 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:18.825 [208/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.084 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:19.344 [210/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:19.344 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:19.344 [212/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.344 [213/743] Generating lib/rte_bpf_def with a custom command 00:02:19.344 [214/743] Generating lib/rte_bpf_mingw with a custom command 00:02:19.344 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:19.344 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:19.603 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:19.603 [218/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:19.603 [219/743] Linking static target lib/librte_cfgfile.a 00:02:19.603 [220/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:19.603 [221/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:19.603 [222/743] Linking static target lib/librte_acl.a 00:02:19.862 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:19.862 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:19.862 [225/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:19.862 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.862 [227/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.862 [228/743] Generating lib/rte_cryptodev_def with a custom command 00:02:19.862 [229/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.862 [230/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:19.862 [231/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.121 [232/743] Linking target lib/librte_eal.so.23.0 00:02:20.121 [233/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.121 [234/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:20.121 [235/743] Linking target lib/librte_ring.so.23.0 00:02:20.121 [236/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:20.121 [237/743] Linking target lib/librte_meter.so.23.0 00:02:20.380 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:20.380 [239/743] Linking target lib/librte_rcu.so.23.0 00:02:20.380 [240/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:20.380 [241/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.380 [242/743] Linking target lib/librte_mempool.so.23.0 00:02:20.380 [243/743] Linking target lib/librte_pci.so.23.0 00:02:20.380 [244/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.380 [245/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:20.380 [246/743] Linking target lib/librte_timer.so.23.0 00:02:20.380 [247/743] Linking target lib/librte_acl.so.23.0 00:02:20.380 [248/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:20.380 [249/743] Generating 
symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:20.380 [250/743] Linking static target lib/librte_bpf.a 00:02:20.380 [251/743] Linking target lib/librte_mbuf.so.23.0 00:02:20.640 [252/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:20.640 [253/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:20.640 [254/743] Linking static target lib/librte_compressdev.a 00:02:20.640 [255/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.640 [256/743] Linking target lib/librte_cfgfile.so.23.0 00:02:20.640 [257/743] Generating lib/rte_distributor_def with a custom command 00:02:20.640 [258/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:20.640 [259/743] Generating lib/rte_distributor_mingw with a custom command 00:02:20.640 [260/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:20.640 [261/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:20.640 [262/743] Generating lib/rte_efd_def with a custom command 00:02:20.640 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:20.640 [264/743] Linking target lib/librte_bbdev.so.23.0 00:02:20.640 [265/743] Linking target lib/librte_net.so.23.0 00:02:20.640 [266/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.899 [267/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:20.899 [268/743] Linking target lib/librte_cmdline.so.23.0 00:02:20.899 [269/743] Linking target lib/librte_hash.so.23.0 00:02:20.899 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:20.899 [271/743] Linking static target lib/librte_distributor.a 00:02:20.899 [272/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:21.158 [273/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.158 [274/743] Linking target lib/librte_distributor.so.23.0 00:02:21.158 [275/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.158 [276/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:21.158 [277/743] Linking target lib/librte_ethdev.so.23.0 00:02:21.417 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:21.417 [279/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:21.417 [280/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.417 [281/743] Linking target lib/librte_metrics.so.23.0 00:02:21.417 [282/743] Linking target lib/librte_bpf.so.23.0 00:02:21.417 [283/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:21.417 [284/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:21.677 [285/743] Linking target lib/librte_bitratestats.so.23.0 00:02:21.677 [286/743] Linking target lib/librte_compressdev.so.23.0 00:02:21.677 [287/743] Generating lib/rte_eventdev_def with a custom command 00:02:21.677 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:21.677 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:21.677 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:02:21.677 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:21.936 [292/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:21.936 [293/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:21.936 [294/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.194 [295/743] Linking static target lib/librte_efd.a 00:02:22.194 [296/743] Linking static target lib/librte_cryptodev.a 00:02:22.194 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.194 [298/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:22.454 [299/743] Linking target lib/librte_efd.so.23.0 00:02:22.454 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:22.454 [301/743] Generating lib/rte_gro_def with a custom command 00:02:22.454 [302/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:22.454 [303/743] Generating lib/rte_gro_mingw with a custom command 00:02:22.454 [304/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:22.454 [305/743] Linking static target lib/librte_gpudev.a 00:02:22.454 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:22.713 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:22.713 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:22.970 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:22.970 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:22.970 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:22.971 [312/743] Generating lib/rte_gso_def with a custom command 00:02:22.971 [313/743] Linking static target lib/librte_gro.a 00:02:23.229 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:23.229 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:23.229 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.229 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:23.229 [318/743] Linking target lib/librte_gpudev.so.23.0 00:02:23.229 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:23.229 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:23.229 [321/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.229 [322/743] Linking target lib/librte_gro.so.23.0 00:02:23.487 [323/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:23.487 [324/743] Generating lib/rte_ip_frag_def with a custom command 00:02:23.487 [325/743] Linking static target lib/librte_eventdev.a 00:02:23.487 [326/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:23.487 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:23.487 [328/743] Linking static target lib/librte_jobstats.a 00:02:23.487 [329/743] Generating lib/rte_jobstats_def with a custom command 00:02:23.487 [330/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:23.487 [331/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:23.487 [332/743] Linking static target lib/librte_gso.a 00:02:23.746 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.746 [334/743] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:23.746 [335/743] Linking target lib/librte_gso.so.23.0 00:02:23.746 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:23.746 [337/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.004 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:24.004 [339/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:24.004 [340/743] Linking target lib/librte_jobstats.so.23.0 00:02:24.004 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:24.004 [342/743] Generating lib/rte_lpm_def with a custom command 00:02:24.004 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:24.004 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:24.004 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:24.004 [346/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.004 [347/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:24.004 [348/743] Linking static target lib/librte_ip_frag.a 00:02:24.004 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:24.262 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:24.262 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.262 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:24.521 [353/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:24.521 [354/743] Generating lib/rte_member_def with a custom command 00:02:24.521 [355/743] Generating lib/rte_member_mingw with a custom command 00:02:24.521 [356/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:24.521 [357/743] Linking static target lib/librte_latencystats.a 00:02:24.521 [358/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:24.521 [359/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:24.521 [360/743] Generating lib/rte_pcapng_def with a custom command 00:02:24.521 [361/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:24.521 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:24.779 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:24.779 [364/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.779 [365/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.779 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.779 [367/743] Linking target lib/librte_latencystats.so.23.0 00:02:24.779 [368/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:24.779 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.779 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.039 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:25.039 [372/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.039 [373/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:25.039 [374/743] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:25.039 [375/743] Linking static target lib/librte_lpm.a 00:02:25.039 [376/743] Generating lib/rte_power_def with a custom command 00:02:25.308 [377/743] Linking target lib/librte_eventdev.so.23.0 00:02:25.308 [378/743] Generating lib/rte_power_mingw with a custom command 00:02:25.308 [379/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:25.308 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:25.308 [381/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.308 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:25.308 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:25.308 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:25.308 [385/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:25.586 [386/743] Linking static target lib/librte_pcapng.a 00:02:25.586 [387/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.586 [388/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.586 [389/743] Linking target lib/librte_lpm.so.23.0 00:02:25.586 [390/743] Generating lib/rte_dmadev_def with a custom command 00:02:25.586 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:25.586 [392/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:25.586 [393/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:25.586 [394/743] Linking static target lib/librte_rawdev.a 00:02:25.586 [395/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:25.586 [396/743] Generating lib/rte_rib_def with a custom command 00:02:25.586 [397/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.586 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:25.586 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:25.858 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:25.858 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.858 [402/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.858 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.858 [404/743] Linking target lib/librte_pcapng.so.23.0 00:02:25.858 [405/743] Linking static target lib/librte_dmadev.a 00:02:25.858 [406/743] Linking static target lib/librte_power.a 00:02:25.858 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:25.858 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.858 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:26.117 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:26.117 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:26.117 [412/743] Linking static target lib/librte_regexdev.a 00:02:26.117 [413/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:26.117 [414/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:26.117 [415/743] Linking static target lib/librte_member.a 00:02:26.117 [416/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:26.117 [417/743] Generating lib/rte_sched_def with a custom command 00:02:26.117 [418/743] 
Generating lib/rte_sched_mingw with a custom command 00:02:26.117 [419/743] Generating lib/rte_security_def with a custom command 00:02:26.117 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:26.376 [421/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.376 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:26.376 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:26.376 [424/743] Linking target lib/librte_dmadev.so.23.0 00:02:26.376 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:26.376 [426/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:26.376 [427/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.376 [428/743] Linking static target lib/librte_stack.a 00:02:26.376 [429/743] Generating lib/rte_stack_def with a custom command 00:02:26.376 [430/743] Generating lib/rte_stack_mingw with a custom command 00:02:26.376 [431/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:26.376 [432/743] Linking target lib/librte_member.so.23.0 00:02:26.376 [433/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:26.376 [434/743] Linking static target lib/librte_reorder.a 00:02:26.635 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.635 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.635 [437/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:26.635 [438/743] Linking static target lib/librte_rib.a 00:02:26.635 [439/743] Linking target lib/librte_stack.so.23.0 00:02:26.635 [440/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.635 [441/743] Linking target lib/librte_power.so.23.0 00:02:26.635 [442/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.635 [443/743] Linking target lib/librte_reorder.so.23.0 00:02:26.894 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.894 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:26.894 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:26.894 [447/743] Linking static target lib/librte_security.a 00:02:26.894 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.151 [449/743] Linking target lib/librte_rib.so.23.0 00:02:27.151 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:27.151 [451/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:27.151 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:27.151 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:27.151 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:27.409 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.409 [456/743] Linking target lib/librte_security.so.23.0 00:02:27.409 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:27.409 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:27.409 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:27.409 [460/743] Linking static target lib/librte_sched.a 00:02:27.975 
[461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.975 [462/743] Linking target lib/librte_sched.so.23.0 00:02:27.975 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:27.975 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:27.975 [465/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.975 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:27.975 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:27.975 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:28.233 [469/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:28.233 [470/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:28.233 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:28.491 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:28.491 [473/743] Generating lib/rte_fib_def with a custom command 00:02:28.491 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:28.491 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:28.491 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:28.491 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:28.491 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:28.749 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:28.749 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:28.749 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:28.749 [482/743] Linking static target lib/librte_ipsec.a 00:02:29.007 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.265 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:29.265 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:29.265 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:29.265 [487/743] Linking static target lib/librte_fib.a 00:02:29.524 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:29.524 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:29.524 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:29.524 [491/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.524 [492/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:29.524 [493/743] Linking target lib/librte_fib.so.23.0 00:02:29.783 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:30.351 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:30.351 [496/743] Generating lib/rte_port_def with a custom command 00:02:30.351 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:30.351 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:30.351 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:30.351 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:30.351 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:30.351 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:30.351 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:30.610 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:30.610 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:30.610 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:30.610 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:30.610 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:30.610 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:30.610 [510/743] Linking static target lib/librte_port.a 00:02:31.179 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:31.179 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:31.179 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.179 [514/743] Linking target lib/librte_port.so.23.0 00:02:31.179 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:31.179 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:31.179 [517/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:31.438 [518/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:31.438 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:31.438 [520/743] Linking static target lib/librte_pdump.a 00:02:31.696 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.696 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:31.696 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:31.696 [524/743] Generating lib/rte_table_def with a custom command 00:02:31.697 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:31.956 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:31.956 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:31.956 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:32.215 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:32.215 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:32.215 [531/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:32.215 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:32.215 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:32.474 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:32.474 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:32.474 [536/743] Linking static target lib/librte_table.a 00:02:32.474 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:32.732 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:32.991 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:32.991 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.991 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:32.991 [542/743] Linking target lib/librte_table.so.23.0 00:02:32.991 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:32.991 [544/743] Generating lib/rte_graph_def with a custom command 00:02:32.991 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:33.250 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:33.250 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:33.508 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:33.508 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:33.768 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:33.768 [551/743] Linking static target lib/librte_graph.a 00:02:33.768 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:34.028 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:34.028 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:34.028 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:34.287 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:34.287 [557/743] Generating lib/rte_node_def with a custom command 00:02:34.287 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:34.287 [559/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.287 [560/743] Linking target lib/librte_graph.so.23.0 00:02:34.287 [561/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:34.546 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:34.546 [563/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:34.546 [564/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:34.546 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:34.546 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:34.546 [567/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:34.546 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:34.546 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:34.806 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:34.806 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:34.806 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:34.806 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:34.806 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:34.806 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:34.806 [576/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:34.806 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:34.806 [578/743] Linking static target lib/librte_node.a 00:02:34.806 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:34.806 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:34.806 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.064 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.065 [583/743] Linking target lib/librte_node.so.23.0 00:02:35.065 [584/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.065 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.065 [586/743] Linking static target drivers/librte_bus_vdev.a 00:02:35.065 [587/743] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.065 [588/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.324 [589/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.324 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.324 [591/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.324 [592/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.324 [593/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.324 [594/743] Linking static target drivers/librte_bus_pci.a 00:02:35.324 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:35.583 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:35.583 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:35.583 [598/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.843 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:35.843 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:35.843 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:35.843 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:35.843 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:35.843 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.102 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.102 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.102 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:36.102 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.102 [609/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:36.102 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:36.362 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:36.930 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:36.930 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:36.930 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:37.189 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:37.448 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:37.448 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:37.707 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:37.707 [619/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:37.707 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:37.966 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:38.225 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:38.225 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:38.225 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:38.225 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:39.162 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:39.162 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:39.421 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:39.421 [629/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:39.421 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:39.421 [631/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:39.421 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:39.421 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:39.421 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:39.680 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:39.939 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:40.199 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:40.199 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:40.199 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:40.458 [640/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:40.458 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:40.458 [642/743] Linking static target lib/librte_vhost.a 00:02:40.458 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:40.458 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:40.458 [645/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:40.717 [646/743] Linking static target drivers/librte_net_i40e.a 00:02:40.717 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:40.717 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:40.717 [649/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:40.717 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:41.284 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:41.284 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.284 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:41.284 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:41.284 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:41.543 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:41.543 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:41.543 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.802 [659/743] Linking target lib/librte_vhost.so.23.0 00:02:41.802 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:42.061 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:42.061 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:42.061 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:42.061 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:42.061 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:42.061 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:42.320 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:42.320 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:42.320 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:42.578 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:42.837 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:42.837 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:43.096 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:43.355 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:43.614 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:43.614 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:43.614 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:43.874 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:43.874 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:43.874 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:43.874 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:44.132 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:44.132 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:44.392 [684/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:44.392 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:44.392 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:44.651 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:44.651 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:44.909 [689/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:44.909 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:44.909 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:44.909 [692/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:44.909 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:44.909 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:45.477 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:45.477 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:45.477 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:45.735 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:45.735 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:46.303 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:46.303 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:46.303 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:46.562 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:46.562 [704/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:46.562 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:46.562 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:46.562 [707/743] Linking static target lib/librte_pipeline.a 00:02:46.820 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:47.080 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:47.080 [710/743] Linking target app/dpdk-dumpcap 00:02:47.339 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:47.339 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:47.339 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:47.339 [714/743] Linking target app/dpdk-pdump 00:02:47.597 [715/743] Linking target app/dpdk-proc-info 00:02:47.597 [716/743] Linking target app/dpdk-test-acl 00:02:47.856 [717/743] Linking target app/dpdk-test-bbdev 00:02:47.856 [718/743] Linking target app/dpdk-test-cmdline 00:02:47.856 [719/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:47.856 [720/743] Linking target app/dpdk-test-compress-perf 00:02:47.856 [721/743] Linking target app/dpdk-test-crypto-perf 00:02:47.856 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:48.115 [723/743] Linking target app/dpdk-test-eventdev 00:02:48.115 [724/743] Linking target app/dpdk-test-fib 00:02:48.115 [725/743] Linking target app/dpdk-test-flow-perf 00:02:48.115 [726/743] Linking target app/dpdk-test-gpudev 00:02:48.374 [727/743] Linking target app/dpdk-test-pipeline 00:02:48.633 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:48.633 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:48.633 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:48.633 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:48.893 [732/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.893 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:48.893 [734/743] Linking target lib/librte_pipeline.so.23.0 00:02:49.152 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:49.152 [736/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:49.152 [737/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:49.411 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:49.411 [739/743] Linking target app/dpdk-test-sad 00:02:49.411 [740/743] Linking target app/dpdk-test-regex 00:02:49.670 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:49.670 [742/743] Linking target app/dpdk-testpmd 00:02:49.928 [743/743] Linking target 
app/dpdk-test-security-perf 00:02:50.187 22:22:50 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:50.187 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:50.187 [0/1] Installing files. 00:02:50.450 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.451 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.452 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.452 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:50.453 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:50.453 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.454 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.454 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.454 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.454 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.714 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:50.715 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:50.715 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:50.715 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.715 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:50.715 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.715 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.715 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.715 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.715 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.715 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.715 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.715 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.715 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.977 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
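(Not part of the build log: the entries above install the EAL public headers — rte_eal.h, rte_lcore.h, rte_malloc.h and friends — into /home/vagrant/spdk_repo/dpdk/build/include. As a rough illustration of what a consumer of this install tree looks like, here is a minimal sketch of a program using those headers; the file name, buffer name and compile command are assumptions, and the pkg-config flags come from the libdpdk.pc that this install step also lays down further below.)

/* eal_hello.c — hypothetical sketch, not taken from this job.
 * Build (assumed): gcc eal_hello.c $(pkg-config --cflags --libs libdpdk) */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_malloc.h>

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer (hugepages, lcores, buses). */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }

    /* Allocate zeroed, 64-byte-aligned memory from DPDK's heap instead of malloc(). */
    char *buf = rte_zmalloc("example_buf", 4096, 64);
    if (buf != NULL) {
        printf("allocated 4 KiB, running on lcore %u\n", rte_lcore_id());
        rte_free(buf);
    }

    rte_eal_cleanup();
    return 0;
}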
00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
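(Also not part of the log: the span above installs the ring, rcu, mempool, mbuf and net headers into the same include directory. A minimal sketch of the rte_ring API those headers expose is given below; it assumes EAL has already been initialized as in the previous sketch, and the ring name "log_ring" is made up for illustration.)

/* Hypothetical ring usage sketch against the headers installed above. */
#include <rte_ring.h>

static int use_ring(void)
{
    /* Single-producer / single-consumer ring with 1024 slots. */
    struct rte_ring *r = rte_ring_create("log_ring", 1024, SOCKET_ID_ANY,
                                         RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r == NULL)
        return -1;

    static int payload = 42;
    void *obj = NULL;

    if (rte_ring_enqueue(r, &payload) != 0) {   /* push one pointer-sized object */
        rte_ring_free(r);
        return -1;
    }
    if (rte_ring_dequeue(r, &obj) != 0) {       /* pop it back out */
        rte_ring_free(r);
        return -1;
    }

    rte_ring_free(r);
    return (obj == &payload) ? 0 : -1;
}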
00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.978 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.979 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:50.980 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:50.980 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:50.980 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:50.980 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:50.980 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:50.980 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:50.980 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:50.980 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:50.980 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:50.980 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:50.980 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:50.980 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:50.980 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:50.980 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:50.980 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:50.980 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:50.980 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:50.980 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:50.980 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:50.980 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:50.980 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:50.980 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:50.980 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:50.980 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:50.980 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:50.980 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:50.980 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:50.980 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:50.980 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:50.980 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:50.980 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:50.980 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:50.980 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:50.980 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:50.980 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:50.980 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:50.980 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:50.980 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:50.980 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:50.980 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:50.980 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:50.980 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:50.980 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:50.980 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:50.980 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:50.980 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:50.980 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:50.980 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:50.980 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:50.980 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:50.980 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:50.980 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:50.980 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:50.980 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:50.980 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:50.980 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:50.980 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:50.980 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:50.980 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:50.980 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:50.980 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:50.980 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:50.980 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:50.980 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:50.980 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:50.980 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:50.980 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:50.980 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:50.980 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:50.980 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:50.981 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:50.981 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:50.981 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:50.981 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:50.981 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:50.981 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:50.981 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:50.981 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:50.981 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:50.981 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:02:50.981 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:50.981 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:50.981 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:50.981 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:50.981 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:50.981 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:50.981 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:50.981 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:50.981 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:50.981 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:50.981 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:50.981 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:50.981 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:50.981 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:50.981 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:50.981 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:50.981 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:50.981 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:50.981 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:50.981 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:50.981 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:50.981 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:50.981 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:50.981 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:50.981 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:50.981 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:50.981 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:50.981 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:50.981 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:50.981 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:50.981 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:50.981 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:50.981 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:50.981 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:50.981 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:50.981 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:50.981 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:50.981 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:50.981 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:50.981 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:50.981 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:50.981 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:50.981 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:50.981 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:50.981 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:50.981 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:50.981 22:22:51 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:50.981 22:22:51 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:50.981 22:22:51 -- common/autobuild_common.sh@203 -- $ cat 00:02:50.981 22:22:51 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:50.981 00:02:50.981 real 0m46.668s 00:02:50.981 user 5m20.870s 00:02:50.981 sys 0m56.914s 00:02:50.981 22:22:51 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:50.981 22:22:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.981 ************************************ 00:02:50.981 END TEST build_native_dpdk 00:02:50.981 ************************************ 00:02:50.981 22:22:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:50.981 22:22:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:50.981 22:22:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:50.981 22:22:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:50.981 22:22:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:50.981 22:22:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:50.981 22:22:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:50.981 
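At this point the locally built DPDK is fully staged under /home/vagrant/spdk_repo/dpdk/build: headers in include/, libraries plus the libdpdk.pc and libdpdk-libs.pc files in lib/pkgconfig, and the bus/mempool/net PMDs symlinked into dpdk/pmds-23.0 by symlink-drivers-solibs.sh. A minimal sketch, assuming only the paths shown above and not part of the recorded run, of how that staged install can be queried through pkg-config before the SPDK configure step below points --with-dpdk at it:

  # sanity-check the just-installed DPDK prefix (illustrative commands, not from the log)
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # reports the DPDK release behind the .so.23 ABI installed above
  pkg-config --cflags --libs libdpdk     # the include/library flags a --with-dpdk build resolves against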
22:22:51 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:02:51.241 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:51.241 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.241 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:51.241 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:51.809 Using 'verbs' RDMA provider 00:03:07.340 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:19.546 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:19.546 go version go1.21.1 linux/amd64 00:03:20.114 Creating mk/config.mk...done. 00:03:20.114 Creating mk/cc.flags.mk...done. 00:03:20.114 Type 'make' to build. 00:03:20.114 22:23:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:20.114 22:23:20 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:20.114 22:23:20 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:20.114 22:23:20 -- common/autotest_common.sh@10 -- $ set +x 00:03:20.114 ************************************ 00:03:20.114 START TEST make 00:03:20.114 ************************************ 00:03:20.114 22:23:20 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:20.373 make[1]: Nothing to be done for 'all'. 00:03:42.309 CC lib/ut/ut.o 00:03:42.309 CC lib/log/log.o 00:03:42.309 CC lib/ut_mock/mock.o 00:03:42.309 CC lib/log/log_deprecated.o 00:03:42.309 CC lib/log/log_flags.o 00:03:42.309 LIB libspdk_ut_mock.a 00:03:42.309 LIB libspdk_log.a 00:03:42.309 LIB libspdk_ut.a 00:03:42.309 SO libspdk_ut_mock.so.5.0 00:03:42.309 SO libspdk_ut.so.1.0 00:03:42.309 SO libspdk_log.so.6.1 00:03:42.309 SYMLINK libspdk_ut_mock.so 00:03:42.309 SYMLINK libspdk_ut.so 00:03:42.309 SYMLINK libspdk_log.so 00:03:42.309 CC lib/util/base64.o 00:03:42.309 CC lib/ioat/ioat.o 00:03:42.309 CC lib/util/bit_array.o 00:03:42.309 CC lib/util/cpuset.o 00:03:42.310 CXX lib/trace_parser/trace.o 00:03:42.310 CC lib/util/crc16.o 00:03:42.310 CC lib/util/crc32.o 00:03:42.310 CC lib/util/crc32c.o 00:03:42.310 CC lib/dma/dma.o 00:03:42.310 CC lib/vfio_user/host/vfio_user_pci.o 00:03:42.310 CC lib/util/crc32_ieee.o 00:03:42.310 CC lib/vfio_user/host/vfio_user.o 00:03:42.310 CC lib/util/crc64.o 00:03:42.310 CC lib/util/dif.o 00:03:42.310 LIB libspdk_dma.a 00:03:42.310 CC lib/util/fd.o 00:03:42.310 SO libspdk_dma.so.3.0 00:03:42.310 CC lib/util/file.o 00:03:42.310 CC lib/util/hexlify.o 00:03:42.310 LIB libspdk_ioat.a 00:03:42.310 CC lib/util/iov.o 00:03:42.310 SYMLINK libspdk_dma.so 00:03:42.310 CC lib/util/math.o 00:03:42.310 SO libspdk_ioat.so.6.0 00:03:42.310 LIB libspdk_vfio_user.a 00:03:42.310 CC lib/util/pipe.o 00:03:42.310 CC lib/util/strerror_tls.o 00:03:42.310 CC lib/util/string.o 00:03:42.310 SYMLINK libspdk_ioat.so 00:03:42.310 SO libspdk_vfio_user.so.4.0 00:03:42.310 CC lib/util/uuid.o 00:03:42.310 CC lib/util/fd_group.o 00:03:42.310 SYMLINK libspdk_vfio_user.so 00:03:42.310 CC lib/util/xor.o 00:03:42.310 CC lib/util/zipf.o 00:03:42.568 LIB libspdk_util.a 00:03:42.568 SO libspdk_util.so.8.0 00:03:42.827 SYMLINK libspdk_util.so 00:03:42.827 LIB libspdk_trace_parser.a 00:03:42.827 SO libspdk_trace_parser.so.4.0 00:03:42.827 CC 
lib/rdma/common.o 00:03:42.827 CC lib/rdma/rdma_verbs.o 00:03:42.827 CC lib/conf/conf.o 00:03:42.827 CC lib/vmd/vmd.o 00:03:42.827 CC lib/vmd/led.o 00:03:42.827 CC lib/idxd/idxd.o 00:03:42.827 CC lib/json/json_parse.o 00:03:42.827 CC lib/idxd/idxd_user.o 00:03:42.827 CC lib/env_dpdk/env.o 00:03:42.827 SYMLINK libspdk_trace_parser.so 00:03:42.827 CC lib/env_dpdk/memory.o 00:03:43.086 CC lib/env_dpdk/pci.o 00:03:43.086 CC lib/env_dpdk/init.o 00:03:43.086 LIB libspdk_conf.a 00:03:43.086 CC lib/json/json_util.o 00:03:43.086 CC lib/json/json_write.o 00:03:43.086 SO libspdk_conf.so.5.0 00:03:43.086 LIB libspdk_rdma.a 00:03:43.086 SYMLINK libspdk_conf.so 00:03:43.086 CC lib/idxd/idxd_kernel.o 00:03:43.086 SO libspdk_rdma.so.5.0 00:03:43.345 SYMLINK libspdk_rdma.so 00:03:43.345 CC lib/env_dpdk/threads.o 00:03:43.345 CC lib/env_dpdk/pci_ioat.o 00:03:43.345 CC lib/env_dpdk/pci_virtio.o 00:03:43.345 CC lib/env_dpdk/pci_vmd.o 00:03:43.345 CC lib/env_dpdk/pci_idxd.o 00:03:43.345 LIB libspdk_json.a 00:03:43.345 CC lib/env_dpdk/pci_event.o 00:03:43.345 LIB libspdk_idxd.a 00:03:43.345 CC lib/env_dpdk/sigbus_handler.o 00:03:43.345 SO libspdk_json.so.5.1 00:03:43.345 SO libspdk_idxd.so.11.0 00:03:43.345 CC lib/env_dpdk/pci_dpdk.o 00:03:43.345 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:43.345 SYMLINK libspdk_json.so 00:03:43.345 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:43.345 LIB libspdk_vmd.a 00:03:43.345 SYMLINK libspdk_idxd.so 00:03:43.604 SO libspdk_vmd.so.5.0 00:03:43.604 SYMLINK libspdk_vmd.so 00:03:43.604 CC lib/jsonrpc/jsonrpc_server.o 00:03:43.604 CC lib/jsonrpc/jsonrpc_client.o 00:03:43.604 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:43.604 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:43.864 LIB libspdk_jsonrpc.a 00:03:43.864 SO libspdk_jsonrpc.so.5.1 00:03:43.864 SYMLINK libspdk_jsonrpc.so 00:03:44.123 CC lib/rpc/rpc.o 00:03:44.123 LIB libspdk_env_dpdk.a 00:03:44.123 SO libspdk_env_dpdk.so.13.0 00:03:44.382 LIB libspdk_rpc.a 00:03:44.382 SO libspdk_rpc.so.5.0 00:03:44.382 SYMLINK libspdk_env_dpdk.so 00:03:44.382 SYMLINK libspdk_rpc.so 00:03:44.382 CC lib/notify/notify.o 00:03:44.382 CC lib/notify/notify_rpc.o 00:03:44.382 CC lib/trace/trace.o 00:03:44.382 CC lib/sock/sock.o 00:03:44.382 CC lib/sock/sock_rpc.o 00:03:44.382 CC lib/trace/trace_rpc.o 00:03:44.382 CC lib/trace/trace_flags.o 00:03:44.641 LIB libspdk_notify.a 00:03:44.641 SO libspdk_notify.so.5.0 00:03:44.641 LIB libspdk_trace.a 00:03:44.641 SYMLINK libspdk_notify.so 00:03:44.641 SO libspdk_trace.so.9.0 00:03:44.900 SYMLINK libspdk_trace.so 00:03:44.900 LIB libspdk_sock.a 00:03:44.900 SO libspdk_sock.so.8.0 00:03:44.900 SYMLINK libspdk_sock.so 00:03:44.900 CC lib/thread/thread.o 00:03:44.900 CC lib/thread/iobuf.o 00:03:45.159 CC lib/nvme/nvme_ctrlr.o 00:03:45.159 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:45.159 CC lib/nvme/nvme_fabric.o 00:03:45.159 CC lib/nvme/nvme_ns.o 00:03:45.159 CC lib/nvme/nvme_ns_cmd.o 00:03:45.159 CC lib/nvme/nvme_pcie_common.o 00:03:45.159 CC lib/nvme/nvme_qpair.o 00:03:45.159 CC lib/nvme/nvme_pcie.o 00:03:45.419 CC lib/nvme/nvme.o 00:03:45.678 CC lib/nvme/nvme_quirks.o 00:03:45.937 CC lib/nvme/nvme_transport.o 00:03:45.937 CC lib/nvme/nvme_discovery.o 00:03:45.937 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:45.937 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:45.937 CC lib/nvme/nvme_tcp.o 00:03:45.937 CC lib/nvme/nvme_opal.o 00:03:46.196 CC lib/nvme/nvme_io_msg.o 00:03:46.196 CC lib/nvme/nvme_poll_group.o 00:03:46.196 LIB libspdk_thread.a 00:03:46.455 SO libspdk_thread.so.9.0 00:03:46.455 SYMLINK libspdk_thread.so 00:03:46.455 CC 
lib/nvme/nvme_zns.o 00:03:46.455 CC lib/nvme/nvme_cuse.o 00:03:46.455 CC lib/nvme/nvme_vfio_user.o 00:03:46.455 CC lib/accel/accel.o 00:03:46.455 CC lib/nvme/nvme_rdma.o 00:03:46.714 CC lib/blob/blobstore.o 00:03:46.714 CC lib/blob/request.o 00:03:46.973 CC lib/blob/zeroes.o 00:03:46.973 CC lib/blob/blob_bs_dev.o 00:03:46.973 CC lib/accel/accel_rpc.o 00:03:46.973 CC lib/accel/accel_sw.o 00:03:46.973 CC lib/init/json_config.o 00:03:47.232 CC lib/virtio/virtio.o 00:03:47.232 CC lib/virtio/virtio_vhost_user.o 00:03:47.232 CC lib/virtio/virtio_vfio_user.o 00:03:47.232 CC lib/virtio/virtio_pci.o 00:03:47.232 CC lib/init/subsystem.o 00:03:47.232 CC lib/init/subsystem_rpc.o 00:03:47.491 LIB libspdk_accel.a 00:03:47.491 CC lib/init/rpc.o 00:03:47.491 SO libspdk_accel.so.14.0 00:03:47.491 SYMLINK libspdk_accel.so 00:03:47.491 LIB libspdk_virtio.a 00:03:47.491 LIB libspdk_init.a 00:03:47.491 SO libspdk_virtio.so.6.0 00:03:47.491 SO libspdk_init.so.4.0 00:03:47.750 SYMLINK libspdk_virtio.so 00:03:47.750 CC lib/bdev/bdev.o 00:03:47.750 CC lib/bdev/bdev_rpc.o 00:03:47.750 CC lib/bdev/bdev_zone.o 00:03:47.750 CC lib/bdev/part.o 00:03:47.750 CC lib/bdev/scsi_nvme.o 00:03:47.750 SYMLINK libspdk_init.so 00:03:47.750 LIB libspdk_nvme.a 00:03:47.750 CC lib/event/app.o 00:03:47.750 CC lib/event/reactor.o 00:03:47.750 CC lib/event/log_rpc.o 00:03:47.750 CC lib/event/app_rpc.o 00:03:47.750 CC lib/event/scheduler_static.o 00:03:48.009 SO libspdk_nvme.so.12.0 00:03:48.268 LIB libspdk_event.a 00:03:48.268 SYMLINK libspdk_nvme.so 00:03:48.268 SO libspdk_event.so.12.0 00:03:48.268 SYMLINK libspdk_event.so 00:03:49.204 LIB libspdk_blob.a 00:03:49.204 SO libspdk_blob.so.10.1 00:03:49.204 SYMLINK libspdk_blob.so 00:03:49.463 CC lib/lvol/lvol.o 00:03:49.463 CC lib/blobfs/blobfs.o 00:03:49.463 CC lib/blobfs/tree.o 00:03:49.722 LIB libspdk_bdev.a 00:03:49.980 SO libspdk_bdev.so.14.0 00:03:49.980 SYMLINK libspdk_bdev.so 00:03:50.239 CC lib/nvmf/ctrlr.o 00:03:50.239 CC lib/nvmf/ctrlr_discovery.o 00:03:50.239 CC lib/nvmf/ctrlr_bdev.o 00:03:50.239 CC lib/nvmf/subsystem.o 00:03:50.239 LIB libspdk_lvol.a 00:03:50.239 CC lib/ublk/ublk.o 00:03:50.239 CC lib/nbd/nbd.o 00:03:50.239 CC lib/scsi/dev.o 00:03:50.239 CC lib/ftl/ftl_core.o 00:03:50.239 SO libspdk_lvol.so.9.1 00:03:50.239 LIB libspdk_blobfs.a 00:03:50.239 SYMLINK libspdk_lvol.so 00:03:50.239 CC lib/nbd/nbd_rpc.o 00:03:50.239 SO libspdk_blobfs.so.9.0 00:03:50.239 SYMLINK libspdk_blobfs.so 00:03:50.240 CC lib/ftl/ftl_init.o 00:03:50.499 CC lib/scsi/lun.o 00:03:50.499 CC lib/scsi/port.o 00:03:50.499 CC lib/ftl/ftl_layout.o 00:03:50.499 LIB libspdk_nbd.a 00:03:50.499 CC lib/ftl/ftl_debug.o 00:03:50.499 CC lib/ftl/ftl_io.o 00:03:50.499 CC lib/scsi/scsi.o 00:03:50.499 SO libspdk_nbd.so.6.0 00:03:50.499 SYMLINK libspdk_nbd.so 00:03:50.499 CC lib/ftl/ftl_sb.o 00:03:50.758 CC lib/ftl/ftl_l2p.o 00:03:50.758 CC lib/ublk/ublk_rpc.o 00:03:50.758 CC lib/scsi/scsi_bdev.o 00:03:50.758 CC lib/scsi/scsi_pr.o 00:03:50.758 CC lib/nvmf/nvmf.o 00:03:50.758 CC lib/nvmf/nvmf_rpc.o 00:03:50.758 CC lib/nvmf/transport.o 00:03:50.758 CC lib/ftl/ftl_l2p_flat.o 00:03:50.758 LIB libspdk_ublk.a 00:03:50.758 SO libspdk_ublk.so.2.0 00:03:51.017 CC lib/ftl/ftl_nv_cache.o 00:03:51.018 SYMLINK libspdk_ublk.so 00:03:51.018 CC lib/ftl/ftl_band.o 00:03:51.018 CC lib/ftl/ftl_band_ops.o 00:03:51.018 CC lib/scsi/scsi_rpc.o 00:03:51.277 CC lib/ftl/ftl_writer.o 00:03:51.277 CC lib/scsi/task.o 00:03:51.277 CC lib/ftl/ftl_rq.o 00:03:51.277 CC lib/ftl/ftl_reloc.o 00:03:51.277 CC lib/ftl/ftl_l2p_cache.o 00:03:51.277 CC 
lib/nvmf/tcp.o 00:03:51.277 LIB libspdk_scsi.a 00:03:51.536 CC lib/nvmf/rdma.o 00:03:51.536 SO libspdk_scsi.so.8.0 00:03:51.536 CC lib/ftl/ftl_p2l.o 00:03:51.536 CC lib/ftl/mngt/ftl_mngt.o 00:03:51.536 SYMLINK libspdk_scsi.so 00:03:51.536 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:51.536 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:51.536 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:51.536 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.795 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.795 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.795 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.795 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.795 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.795 CC lib/iscsi/conn.o 00:03:52.054 CC lib/vhost/vhost.o 00:03:52.054 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:52.054 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:52.054 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:52.054 CC lib/ftl/utils/ftl_conf.o 00:03:52.054 CC lib/iscsi/init_grp.o 00:03:52.054 CC lib/iscsi/iscsi.o 00:03:52.054 CC lib/iscsi/md5.o 00:03:52.054 CC lib/iscsi/param.o 00:03:52.054 CC lib/iscsi/portal_grp.o 00:03:52.313 CC lib/iscsi/tgt_node.o 00:03:52.313 CC lib/vhost/vhost_rpc.o 00:03:52.313 CC lib/ftl/utils/ftl_md.o 00:03:52.313 CC lib/ftl/utils/ftl_mempool.o 00:03:52.573 CC lib/ftl/utils/ftl_bitmap.o 00:03:52.573 CC lib/ftl/utils/ftl_property.o 00:03:52.573 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:52.573 CC lib/vhost/vhost_scsi.o 00:03:52.573 CC lib/vhost/vhost_blk.o 00:03:52.573 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:52.573 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:52.832 CC lib/vhost/rte_vhost_user.o 00:03:52.832 CC lib/iscsi/iscsi_subsystem.o 00:03:52.832 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:52.832 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:52.832 CC lib/iscsi/iscsi_rpc.o 00:03:52.832 CC lib/iscsi/task.o 00:03:52.832 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.091 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.091 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.091 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.091 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.091 CC lib/ftl/base/ftl_base_dev.o 00:03:53.091 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.350 LIB libspdk_nvmf.a 00:03:53.350 CC lib/ftl/ftl_trace.o 00:03:53.350 LIB libspdk_iscsi.a 00:03:53.350 SO libspdk_nvmf.so.17.0 00:03:53.350 SO libspdk_iscsi.so.7.0 00:03:53.350 LIB libspdk_ftl.a 00:03:53.609 SYMLINK libspdk_nvmf.so 00:03:53.609 SYMLINK libspdk_iscsi.so 00:03:53.609 SO libspdk_ftl.so.8.0 00:03:53.609 LIB libspdk_vhost.a 00:03:53.868 SO libspdk_vhost.so.7.1 00:03:53.868 SYMLINK libspdk_vhost.so 00:03:53.868 SYMLINK libspdk_ftl.so 00:03:54.126 CC module/env_dpdk/env_dpdk_rpc.o 00:03:54.126 CC module/accel/ioat/accel_ioat.o 00:03:54.126 CC module/accel/error/accel_error.o 00:03:54.126 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.126 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.126 CC module/accel/iaa/accel_iaa.o 00:03:54.126 CC module/sock/posix/posix.o 00:03:54.126 CC module/blob/bdev/blob_bdev.o 00:03:54.126 CC module/accel/dsa/accel_dsa.o 00:03:54.126 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.126 LIB libspdk_env_dpdk_rpc.a 00:03:54.126 SO libspdk_env_dpdk_rpc.so.5.0 00:03:54.386 LIB libspdk_scheduler_dpdk_governor.a 00:03:54.386 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.386 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.386 LIB libspdk_scheduler_gscheduler.a 00:03:54.386 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:54.386 CC module/accel/error/accel_error_rpc.o 00:03:54.386 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.386 SO 
libspdk_scheduler_gscheduler.so.3.0 00:03:54.386 LIB libspdk_scheduler_dynamic.a 00:03:54.386 SO libspdk_scheduler_dynamic.so.3.0 00:03:54.386 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.386 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:54.386 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.386 LIB libspdk_blob_bdev.a 00:03:54.386 SYMLINK libspdk_scheduler_dynamic.so 00:03:54.386 SO libspdk_blob_bdev.so.10.1 00:03:54.386 LIB libspdk_accel_iaa.a 00:03:54.386 LIB libspdk_accel_ioat.a 00:03:54.386 LIB libspdk_accel_error.a 00:03:54.386 SO libspdk_accel_iaa.so.2.0 00:03:54.386 SO libspdk_accel_ioat.so.5.0 00:03:54.386 SYMLINK libspdk_blob_bdev.so 00:03:54.386 SO libspdk_accel_error.so.1.0 00:03:54.386 LIB libspdk_accel_dsa.a 00:03:54.386 SYMLINK libspdk_accel_iaa.so 00:03:54.645 SYMLINK libspdk_accel_ioat.so 00:03:54.645 SO libspdk_accel_dsa.so.4.0 00:03:54.645 SYMLINK libspdk_accel_error.so 00:03:54.645 SYMLINK libspdk_accel_dsa.so 00:03:54.645 CC module/blobfs/bdev/blobfs_bdev.o 00:03:54.645 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.645 CC module/bdev/nvme/bdev_nvme.o 00:03:54.645 CC module/bdev/null/bdev_null.o 00:03:54.645 CC module/bdev/delay/vbdev_delay.o 00:03:54.645 CC module/bdev/gpt/gpt.o 00:03:54.645 CC module/bdev/error/vbdev_error.o 00:03:54.645 CC module/bdev/malloc/bdev_malloc.o 00:03:54.645 CC module/bdev/passthru/vbdev_passthru.o 00:03:54.904 LIB libspdk_sock_posix.a 00:03:54.904 SO libspdk_sock_posix.so.5.0 00:03:54.904 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:54.904 CC module/bdev/gpt/vbdev_gpt.o 00:03:54.904 SYMLINK libspdk_sock_posix.so 00:03:54.904 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.904 CC module/bdev/null/bdev_null_rpc.o 00:03:54.904 CC module/bdev/error/vbdev_error_rpc.o 00:03:54.904 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:54.904 LIB libspdk_blobfs_bdev.a 00:03:54.904 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:54.904 SO libspdk_blobfs_bdev.so.5.0 00:03:55.171 LIB libspdk_bdev_malloc.a 00:03:55.171 LIB libspdk_bdev_null.a 00:03:55.171 SYMLINK libspdk_blobfs_bdev.so 00:03:55.171 SO libspdk_bdev_malloc.so.5.0 00:03:55.171 LIB libspdk_bdev_error.a 00:03:55.171 SO libspdk_bdev_null.so.5.0 00:03:55.171 LIB libspdk_bdev_gpt.a 00:03:55.171 CC module/bdev/raid/bdev_raid.o 00:03:55.171 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:55.171 SO libspdk_bdev_error.so.5.0 00:03:55.171 SYMLINK libspdk_bdev_malloc.so 00:03:55.171 LIB libspdk_bdev_passthru.a 00:03:55.171 SO libspdk_bdev_gpt.so.5.0 00:03:55.171 SYMLINK libspdk_bdev_null.so 00:03:55.171 SO libspdk_bdev_passthru.so.5.0 00:03:55.171 LIB libspdk_bdev_delay.a 00:03:55.171 SYMLINK libspdk_bdev_error.so 00:03:55.171 SO libspdk_bdev_delay.so.5.0 00:03:55.171 SYMLINK libspdk_bdev_gpt.so 00:03:55.171 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.171 CC module/bdev/raid/bdev_raid_sb.o 00:03:55.171 CC module/bdev/split/vbdev_split.o 00:03:55.171 SYMLINK libspdk_bdev_passthru.so 00:03:55.171 CC module/bdev/raid/raid0.o 00:03:55.171 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:55.171 SYMLINK libspdk_bdev_delay.so 00:03:55.171 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:55.171 CC module/bdev/aio/bdev_aio.o 00:03:55.488 LIB libspdk_bdev_lvol.a 00:03:55.488 CC module/bdev/raid/raid1.o 00:03:55.488 CC module/bdev/raid/concat.o 00:03:55.488 CC module/bdev/split/vbdev_split_rpc.o 00:03:55.488 SO libspdk_bdev_lvol.so.5.0 00:03:55.488 CC module/bdev/aio/bdev_aio_rpc.o 00:03:55.488 CC module/bdev/ftl/bdev_ftl.o 00:03:55.488 SYMLINK libspdk_bdev_lvol.so 00:03:55.488 LIB libspdk_bdev_zone_block.a 
00:03:55.488 SO libspdk_bdev_zone_block.so.5.0 00:03:55.488 LIB libspdk_bdev_split.a 00:03:55.773 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.773 LIB libspdk_bdev_aio.a 00:03:55.773 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.773 SO libspdk_bdev_split.so.5.0 00:03:55.773 SYMLINK libspdk_bdev_zone_block.so 00:03:55.773 CC module/bdev/iscsi/bdev_iscsi.o 00:03:55.773 CC module/bdev/nvme/nvme_rpc.o 00:03:55.773 SO libspdk_bdev_aio.so.5.0 00:03:55.773 SYMLINK libspdk_bdev_split.so 00:03:55.773 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:55.773 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:55.773 SYMLINK libspdk_bdev_aio.so 00:03:55.773 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:55.773 CC module/bdev/nvme/bdev_mdns_client.o 00:03:55.773 LIB libspdk_bdev_ftl.a 00:03:55.773 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:55.773 SO libspdk_bdev_ftl.so.5.0 00:03:55.773 CC module/bdev/nvme/vbdev_opal.o 00:03:55.773 LIB libspdk_bdev_raid.a 00:03:56.037 SYMLINK libspdk_bdev_ftl.so 00:03:56.037 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:56.037 SO libspdk_bdev_raid.so.5.0 00:03:56.037 LIB libspdk_bdev_iscsi.a 00:03:56.037 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:56.037 SO libspdk_bdev_iscsi.so.5.0 00:03:56.037 SYMLINK libspdk_bdev_raid.so 00:03:56.037 SYMLINK libspdk_bdev_iscsi.so 00:03:56.296 LIB libspdk_bdev_virtio.a 00:03:56.296 SO libspdk_bdev_virtio.so.5.0 00:03:56.296 SYMLINK libspdk_bdev_virtio.so 00:03:56.555 LIB libspdk_bdev_nvme.a 00:03:56.556 SO libspdk_bdev_nvme.so.6.0 00:03:56.815 SYMLINK libspdk_bdev_nvme.so 00:03:57.074 CC module/event/subsystems/vmd/vmd.o 00:03:57.074 CC module/event/subsystems/iobuf/iobuf.o 00:03:57.074 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:57.074 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:57.074 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:57.074 CC module/event/subsystems/scheduler/scheduler.o 00:03:57.074 CC module/event/subsystems/sock/sock.o 00:03:57.333 LIB libspdk_event_scheduler.a 00:03:57.333 LIB libspdk_event_vhost_blk.a 00:03:57.333 LIB libspdk_event_vmd.a 00:03:57.333 SO libspdk_event_vhost_blk.so.2.0 00:03:57.333 SO libspdk_event_scheduler.so.3.0 00:03:57.333 SO libspdk_event_vmd.so.5.0 00:03:57.333 LIB libspdk_event_sock.a 00:03:57.333 LIB libspdk_event_iobuf.a 00:03:57.333 SYMLINK libspdk_event_vhost_blk.so 00:03:57.333 SYMLINK libspdk_event_scheduler.so 00:03:57.333 SO libspdk_event_sock.so.4.0 00:03:57.333 SO libspdk_event_iobuf.so.2.0 00:03:57.333 SYMLINK libspdk_event_vmd.so 00:03:57.333 SYMLINK libspdk_event_iobuf.so 00:03:57.333 SYMLINK libspdk_event_sock.so 00:03:57.591 CC module/event/subsystems/accel/accel.o 00:03:57.591 LIB libspdk_event_accel.a 00:03:57.850 SO libspdk_event_accel.so.5.0 00:03:57.850 SYMLINK libspdk_event_accel.so 00:03:58.108 CC module/event/subsystems/bdev/bdev.o 00:03:58.108 LIB libspdk_event_bdev.a 00:03:58.367 SO libspdk_event_bdev.so.5.0 00:03:58.367 SYMLINK libspdk_event_bdev.so 00:03:58.367 CC module/event/subsystems/scsi/scsi.o 00:03:58.367 CC module/event/subsystems/nbd/nbd.o 00:03:58.367 CC module/event/subsystems/ublk/ublk.o 00:03:58.367 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:58.367 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:58.624 LIB libspdk_event_nbd.a 00:03:58.624 LIB libspdk_event_ublk.a 00:03:58.624 LIB libspdk_event_scsi.a 00:03:58.624 SO libspdk_event_ublk.so.2.0 00:03:58.624 SO libspdk_event_nbd.so.5.0 00:03:58.624 SO libspdk_event_scsi.so.5.0 00:03:58.624 SYMLINK libspdk_event_ublk.so 00:03:58.624 SYMLINK libspdk_event_nbd.so 00:03:58.624 SYMLINK 
libspdk_event_scsi.so 00:03:58.624 LIB libspdk_event_nvmf.a 00:03:58.883 SO libspdk_event_nvmf.so.5.0 00:03:58.883 SYMLINK libspdk_event_nvmf.so 00:03:58.883 CC module/event/subsystems/iscsi/iscsi.o 00:03:58.883 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:59.142 LIB libspdk_event_vhost_scsi.a 00:03:59.142 LIB libspdk_event_iscsi.a 00:03:59.142 SO libspdk_event_vhost_scsi.so.2.0 00:03:59.142 SO libspdk_event_iscsi.so.5.0 00:03:59.142 SYMLINK libspdk_event_vhost_scsi.so 00:03:59.142 SYMLINK libspdk_event_iscsi.so 00:03:59.400 SO libspdk.so.5.0 00:03:59.400 SYMLINK libspdk.so 00:03:59.400 CXX app/trace/trace.o 00:03:59.659 CC examples/accel/perf/accel_perf.o 00:03:59.659 CC examples/ioat/perf/perf.o 00:03:59.659 CC examples/sock/hello_world/hello_sock.o 00:03:59.659 CC examples/nvme/hello_world/hello_world.o 00:03:59.659 CC examples/vmd/lsvmd/lsvmd.o 00:03:59.659 CC examples/bdev/hello_world/hello_bdev.o 00:03:59.659 CC examples/blob/hello_world/hello_blob.o 00:03:59.659 CC test/accel/dif/dif.o 00:03:59.659 CC examples/nvmf/nvmf/nvmf.o 00:03:59.659 LINK lsvmd 00:03:59.917 LINK hello_bdev 00:03:59.917 LINK hello_world 00:03:59.917 LINK ioat_perf 00:03:59.917 LINK hello_sock 00:03:59.917 LINK hello_blob 00:03:59.917 LINK spdk_trace 00:03:59.917 LINK nvmf 00:03:59.917 LINK accel_perf 00:03:59.917 CC examples/vmd/led/led.o 00:03:59.917 LINK dif 00:03:59.917 CC examples/nvme/reconnect/reconnect.o 00:03:59.917 CC examples/ioat/verify/verify.o 00:04:00.176 CC examples/bdev/bdevperf/bdevperf.o 00:04:00.176 CC examples/util/zipf/zipf.o 00:04:00.176 CC examples/blob/cli/blobcli.o 00:04:00.176 LINK led 00:04:00.176 CC app/trace_record/trace_record.o 00:04:00.176 LINK verify 00:04:00.176 LINK zipf 00:04:00.176 CC app/nvmf_tgt/nvmf_main.o 00:04:00.176 CC test/bdev/bdevio/bdevio.o 00:04:00.176 CC test/app/bdev_svc/bdev_svc.o 00:04:00.434 LINK reconnect 00:04:00.434 CC app/iscsi_tgt/iscsi_tgt.o 00:04:00.434 LINK spdk_trace_record 00:04:00.434 LINK nvmf_tgt 00:04:00.434 TEST_HEADER include/spdk/accel.h 00:04:00.434 LINK bdev_svc 00:04:00.434 TEST_HEADER include/spdk/accel_module.h 00:04:00.434 TEST_HEADER include/spdk/assert.h 00:04:00.434 TEST_HEADER include/spdk/barrier.h 00:04:00.434 TEST_HEADER include/spdk/base64.h 00:04:00.434 TEST_HEADER include/spdk/bdev.h 00:04:00.434 TEST_HEADER include/spdk/bdev_module.h 00:04:00.434 TEST_HEADER include/spdk/bdev_zone.h 00:04:00.434 TEST_HEADER include/spdk/bit_array.h 00:04:00.434 TEST_HEADER include/spdk/bit_pool.h 00:04:00.434 TEST_HEADER include/spdk/blob_bdev.h 00:04:00.434 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:00.434 TEST_HEADER include/spdk/blobfs.h 00:04:00.434 TEST_HEADER include/spdk/blob.h 00:04:00.434 TEST_HEADER include/spdk/conf.h 00:04:00.434 TEST_HEADER include/spdk/config.h 00:04:00.434 TEST_HEADER include/spdk/cpuset.h 00:04:00.434 TEST_HEADER include/spdk/crc16.h 00:04:00.434 TEST_HEADER include/spdk/crc32.h 00:04:00.434 TEST_HEADER include/spdk/crc64.h 00:04:00.434 TEST_HEADER include/spdk/dif.h 00:04:00.434 TEST_HEADER include/spdk/dma.h 00:04:00.434 TEST_HEADER include/spdk/endian.h 00:04:00.434 TEST_HEADER include/spdk/env_dpdk.h 00:04:00.434 TEST_HEADER include/spdk/env.h 00:04:00.434 TEST_HEADER include/spdk/event.h 00:04:00.434 TEST_HEADER include/spdk/fd_group.h 00:04:00.434 TEST_HEADER include/spdk/fd.h 00:04:00.434 TEST_HEADER include/spdk/file.h 00:04:00.434 CC test/blobfs/mkfs/mkfs.o 00:04:00.434 TEST_HEADER include/spdk/ftl.h 00:04:00.434 TEST_HEADER include/spdk/gpt_spec.h 00:04:00.434 TEST_HEADER 
include/spdk/hexlify.h 00:04:00.434 TEST_HEADER include/spdk/histogram_data.h 00:04:00.434 TEST_HEADER include/spdk/idxd.h 00:04:00.434 TEST_HEADER include/spdk/idxd_spec.h 00:04:00.434 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:00.434 TEST_HEADER include/spdk/init.h 00:04:00.434 TEST_HEADER include/spdk/ioat.h 00:04:00.434 TEST_HEADER include/spdk/ioat_spec.h 00:04:00.434 TEST_HEADER include/spdk/iscsi_spec.h 00:04:00.434 TEST_HEADER include/spdk/json.h 00:04:00.434 TEST_HEADER include/spdk/jsonrpc.h 00:04:00.434 TEST_HEADER include/spdk/likely.h 00:04:00.434 LINK blobcli 00:04:00.434 TEST_HEADER include/spdk/log.h 00:04:00.434 TEST_HEADER include/spdk/lvol.h 00:04:00.715 TEST_HEADER include/spdk/memory.h 00:04:00.715 TEST_HEADER include/spdk/mmio.h 00:04:00.715 TEST_HEADER include/spdk/nbd.h 00:04:00.715 LINK iscsi_tgt 00:04:00.715 TEST_HEADER include/spdk/notify.h 00:04:00.715 TEST_HEADER include/spdk/nvme.h 00:04:00.715 TEST_HEADER include/spdk/nvme_intel.h 00:04:00.715 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:00.715 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:00.715 TEST_HEADER include/spdk/nvme_spec.h 00:04:00.715 TEST_HEADER include/spdk/nvme_zns.h 00:04:00.715 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:00.715 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:00.715 TEST_HEADER include/spdk/nvmf.h 00:04:00.715 TEST_HEADER include/spdk/nvmf_spec.h 00:04:00.715 TEST_HEADER include/spdk/nvmf_transport.h 00:04:00.715 TEST_HEADER include/spdk/opal.h 00:04:00.715 TEST_HEADER include/spdk/opal_spec.h 00:04:00.715 TEST_HEADER include/spdk/pci_ids.h 00:04:00.715 TEST_HEADER include/spdk/pipe.h 00:04:00.715 TEST_HEADER include/spdk/queue.h 00:04:00.715 TEST_HEADER include/spdk/reduce.h 00:04:00.715 TEST_HEADER include/spdk/rpc.h 00:04:00.715 TEST_HEADER include/spdk/scheduler.h 00:04:00.715 LINK bdevio 00:04:00.715 TEST_HEADER include/spdk/scsi.h 00:04:00.715 TEST_HEADER include/spdk/scsi_spec.h 00:04:00.715 TEST_HEADER include/spdk/sock.h 00:04:00.715 TEST_HEADER include/spdk/stdinc.h 00:04:00.715 TEST_HEADER include/spdk/string.h 00:04:00.715 TEST_HEADER include/spdk/thread.h 00:04:00.715 TEST_HEADER include/spdk/trace.h 00:04:00.715 TEST_HEADER include/spdk/trace_parser.h 00:04:00.715 TEST_HEADER include/spdk/tree.h 00:04:00.715 TEST_HEADER include/spdk/ublk.h 00:04:00.715 TEST_HEADER include/spdk/util.h 00:04:00.715 TEST_HEADER include/spdk/uuid.h 00:04:00.715 TEST_HEADER include/spdk/version.h 00:04:00.715 CC test/dma/test_dma/test_dma.o 00:04:00.715 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:00.715 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:00.715 TEST_HEADER include/spdk/vhost.h 00:04:00.715 TEST_HEADER include/spdk/vmd.h 00:04:00.715 TEST_HEADER include/spdk/xor.h 00:04:00.715 TEST_HEADER include/spdk/zipf.h 00:04:00.715 CXX test/cpp_headers/accel.o 00:04:00.715 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:00.715 LINK mkfs 00:04:00.715 CC app/spdk_tgt/spdk_tgt.o 00:04:00.715 CC examples/nvme/arbitration/arbitration.o 00:04:00.715 LINK bdevperf 00:04:00.715 CC examples/nvme/hotplug/hotplug.o 00:04:00.974 CC test/app/histogram_perf/histogram_perf.o 00:04:00.974 CXX test/cpp_headers/accel_module.o 00:04:00.974 LINK nvme_manage 00:04:00.974 LINK spdk_tgt 00:04:00.974 CC test/app/jsoncat/jsoncat.o 00:04:00.974 LINK histogram_perf 00:04:00.974 LINK test_dma 00:04:00.974 LINK hotplug 00:04:00.974 CXX test/cpp_headers/assert.o 00:04:00.974 CC test/app/stub/stub.o 00:04:01.233 LINK arbitration 00:04:01.233 LINK jsoncat 00:04:01.233 LINK nvme_fuzz 00:04:01.233 CXX 
test/cpp_headers/barrier.o 00:04:01.233 CC app/spdk_lspci/spdk_lspci.o 00:04:01.233 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:01.233 CC examples/thread/thread/thread_ex.o 00:04:01.233 LINK stub 00:04:01.233 CC examples/idxd/perf/perf.o 00:04:01.233 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:01.233 LINK spdk_lspci 00:04:01.233 CXX test/cpp_headers/base64.o 00:04:01.492 CC test/event/event_perf/event_perf.o 00:04:01.492 CC test/env/mem_callbacks/mem_callbacks.o 00:04:01.492 LINK cmb_copy 00:04:01.492 CC test/lvol/esnap/esnap.o 00:04:01.492 CC test/event/reactor/reactor.o 00:04:01.492 LINK thread 00:04:01.492 LINK event_perf 00:04:01.492 CXX test/cpp_headers/bdev.o 00:04:01.492 LINK mem_callbacks 00:04:01.492 CC app/spdk_nvme_perf/perf.o 00:04:01.492 LINK reactor 00:04:01.492 CC examples/nvme/abort/abort.o 00:04:01.751 LINK idxd_perf 00:04:01.751 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:01.751 CC test/env/vtophys/vtophys.o 00:04:01.751 CXX test/cpp_headers/bdev_module.o 00:04:01.751 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:01.751 CC test/event/reactor_perf/reactor_perf.o 00:04:01.751 CC test/event/app_repeat/app_repeat.o 00:04:01.751 LINK vtophys 00:04:01.751 LINK pmr_persistence 00:04:02.010 LINK reactor_perf 00:04:02.010 LINK env_dpdk_post_init 00:04:02.010 CXX test/cpp_headers/bdev_zone.o 00:04:02.010 LINK abort 00:04:02.010 LINK app_repeat 00:04:02.010 CC test/rpc_client/rpc_client_test.o 00:04:02.010 CXX test/cpp_headers/bit_array.o 00:04:02.010 CC test/env/memory/memory_ut.o 00:04:02.010 CC test/event/scheduler/scheduler.o 00:04:02.010 CC test/nvme/aer/aer.o 00:04:02.269 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.269 CXX test/cpp_headers/bit_pool.o 00:04:02.269 LINK rpc_client_test 00:04:02.269 LINK spdk_nvme_perf 00:04:02.269 LINK scheduler 00:04:02.527 LINK aer 00:04:02.527 LINK interrupt_tgt 00:04:02.527 CXX test/cpp_headers/blob_bdev.o 00:04:02.527 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.527 CC app/spdk_nvme_identify/identify.o 00:04:02.527 LINK memory_ut 00:04:02.527 CC test/env/pci/pci_ut.o 00:04:02.527 CC test/nvme/reset/reset.o 00:04:02.527 CXX test/cpp_headers/blobfs.o 00:04:02.527 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:02.527 CC app/spdk_nvme_discover/discovery_aer.o 00:04:02.786 CXX test/cpp_headers/blob.o 00:04:02.786 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:02.786 CC app/spdk_top/spdk_top.o 00:04:02.786 LINK iscsi_fuzz 00:04:02.786 LINK spdk_nvme_discover 00:04:02.786 LINK reset 00:04:02.786 CXX test/cpp_headers/conf.o 00:04:03.045 LINK pci_ut 00:04:03.045 CXX test/cpp_headers/config.o 00:04:03.045 CXX test/cpp_headers/cpuset.o 00:04:03.045 CC test/nvme/sgl/sgl.o 00:04:03.045 CC test/nvme/e2edp/nvme_dp.o 00:04:03.045 CC test/nvme/overhead/overhead.o 00:04:03.045 LINK vhost_fuzz 00:04:03.304 CXX test/cpp_headers/crc16.o 00:04:03.304 LINK spdk_nvme_identify 00:04:03.304 LINK sgl 00:04:03.304 CXX test/cpp_headers/crc32.o 00:04:03.304 CC test/thread/poller_perf/poller_perf.o 00:04:03.563 LINK overhead 00:04:03.563 LINK nvme_dp 00:04:03.563 CC app/vhost/vhost.o 00:04:03.563 CXX test/cpp_headers/crc64.o 00:04:03.563 LINK poller_perf 00:04:03.563 CXX test/cpp_headers/dif.o 00:04:03.563 CXX test/cpp_headers/dma.o 00:04:03.563 LINK spdk_top 00:04:03.821 CC test/nvme/err_injection/err_injection.o 00:04:03.821 LINK vhost 00:04:03.821 CC app/spdk_dd/spdk_dd.o 00:04:03.821 CXX test/cpp_headers/endian.o 00:04:03.821 CC test/nvme/startup/startup.o 00:04:03.821 CXX test/cpp_headers/env_dpdk.o 00:04:03.821 CC 
app/fio/nvme/fio_plugin.o 00:04:03.821 CC app/fio/bdev/fio_plugin.o 00:04:03.821 LINK err_injection 00:04:03.821 CC test/nvme/reserve/reserve.o 00:04:04.080 CC test/nvme/simple_copy/simple_copy.o 00:04:04.080 CXX test/cpp_headers/env.o 00:04:04.080 LINK startup 00:04:04.080 CC test/nvme/connect_stress/connect_stress.o 00:04:04.080 LINK spdk_dd 00:04:04.080 CXX test/cpp_headers/event.o 00:04:04.080 LINK reserve 00:04:04.080 CC test/nvme/boot_partition/boot_partition.o 00:04:04.080 LINK simple_copy 00:04:04.338 LINK connect_stress 00:04:04.338 CXX test/cpp_headers/fd_group.o 00:04:04.338 LINK spdk_bdev 00:04:04.338 LINK spdk_nvme 00:04:04.338 CC test/nvme/compliance/nvme_compliance.o 00:04:04.338 LINK boot_partition 00:04:04.338 CC test/nvme/fused_ordering/fused_ordering.o 00:04:04.338 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:04.338 CXX test/cpp_headers/fd.o 00:04:04.338 CXX test/cpp_headers/file.o 00:04:04.597 CC test/nvme/fdp/fdp.o 00:04:04.597 CXX test/cpp_headers/ftl.o 00:04:04.597 CC test/nvme/cuse/cuse.o 00:04:04.597 LINK fused_ordering 00:04:04.597 LINK doorbell_aers 00:04:04.597 CXX test/cpp_headers/gpt_spec.o 00:04:04.597 LINK nvme_compliance 00:04:04.597 CXX test/cpp_headers/hexlify.o 00:04:04.597 CXX test/cpp_headers/histogram_data.o 00:04:04.856 CXX test/cpp_headers/idxd.o 00:04:04.856 CXX test/cpp_headers/idxd_spec.o 00:04:04.856 CXX test/cpp_headers/init.o 00:04:04.856 CXX test/cpp_headers/ioat.o 00:04:04.856 LINK fdp 00:04:04.856 CXX test/cpp_headers/ioat_spec.o 00:04:04.856 CXX test/cpp_headers/iscsi_spec.o 00:04:04.856 CXX test/cpp_headers/json.o 00:04:04.856 CXX test/cpp_headers/jsonrpc.o 00:04:04.856 CXX test/cpp_headers/likely.o 00:04:05.115 CXX test/cpp_headers/log.o 00:04:05.115 CXX test/cpp_headers/lvol.o 00:04:05.115 CXX test/cpp_headers/memory.o 00:04:05.115 CXX test/cpp_headers/mmio.o 00:04:05.115 CXX test/cpp_headers/nbd.o 00:04:05.115 CXX test/cpp_headers/notify.o 00:04:05.115 CXX test/cpp_headers/nvme.o 00:04:05.115 CXX test/cpp_headers/nvme_intel.o 00:04:05.115 CXX test/cpp_headers/nvme_ocssd.o 00:04:05.115 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:05.373 CXX test/cpp_headers/nvme_spec.o 00:04:05.373 CXX test/cpp_headers/nvme_zns.o 00:04:05.373 CXX test/cpp_headers/nvmf_cmd.o 00:04:05.373 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:05.373 CXX test/cpp_headers/nvmf.o 00:04:05.373 CXX test/cpp_headers/nvmf_spec.o 00:04:05.373 CXX test/cpp_headers/nvmf_transport.o 00:04:05.373 CXX test/cpp_headers/opal.o 00:04:05.373 CXX test/cpp_headers/opal_spec.o 00:04:05.632 CXX test/cpp_headers/pci_ids.o 00:04:05.632 LINK cuse 00:04:05.632 CXX test/cpp_headers/pipe.o 00:04:05.632 CXX test/cpp_headers/queue.o 00:04:05.632 CXX test/cpp_headers/reduce.o 00:04:05.632 CXX test/cpp_headers/rpc.o 00:04:05.632 CXX test/cpp_headers/scheduler.o 00:04:05.632 CXX test/cpp_headers/scsi.o 00:04:05.632 CXX test/cpp_headers/scsi_spec.o 00:04:05.632 CXX test/cpp_headers/sock.o 00:04:05.632 CXX test/cpp_headers/stdinc.o 00:04:05.632 CXX test/cpp_headers/string.o 00:04:05.632 CXX test/cpp_headers/thread.o 00:04:05.632 CXX test/cpp_headers/trace.o 00:04:05.890 CXX test/cpp_headers/trace_parser.o 00:04:05.891 CXX test/cpp_headers/tree.o 00:04:05.891 CXX test/cpp_headers/ublk.o 00:04:05.891 CXX test/cpp_headers/util.o 00:04:05.891 CXX test/cpp_headers/uuid.o 00:04:05.891 CXX test/cpp_headers/version.o 00:04:06.149 CXX test/cpp_headers/vfio_user_pci.o 00:04:06.149 CXX test/cpp_headers/vfio_user_spec.o 00:04:06.149 CXX test/cpp_headers/vhost.o 00:04:06.149 CXX test/cpp_headers/vmd.o 
00:04:06.149 CXX test/cpp_headers/xor.o 00:04:06.149 LINK esnap 00:04:06.149 CXX test/cpp_headers/zipf.o 00:04:08.682 00:04:08.682 real 0m48.617s 00:04:08.682 user 4m35.306s 00:04:08.682 sys 1m3.836s 00:04:08.682 22:24:09 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:08.682 22:24:09 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.682 ************************************ 00:04:08.682 END TEST make 00:04:08.682 ************************************ 00:04:08.682 22:24:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:08.682 22:24:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:08.682 22:24:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:08.943 22:24:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:08.943 22:24:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:08.943 22:24:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:08.943 22:24:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:08.943 22:24:09 -- scripts/common.sh@335 -- # IFS=.-: 00:04:08.943 22:24:09 -- scripts/common.sh@335 -- # read -ra ver1 00:04:08.943 22:24:09 -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.943 22:24:09 -- scripts/common.sh@336 -- # read -ra ver2 00:04:08.943 22:24:09 -- scripts/common.sh@337 -- # local 'op=<' 00:04:08.943 22:24:09 -- scripts/common.sh@339 -- # ver1_l=2 00:04:08.943 22:24:09 -- scripts/common.sh@340 -- # ver2_l=1 00:04:08.943 22:24:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:08.943 22:24:09 -- scripts/common.sh@343 -- # case "$op" in 00:04:08.943 22:24:09 -- scripts/common.sh@344 -- # : 1 00:04:08.943 22:24:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:08.943 22:24:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.943 22:24:09 -- scripts/common.sh@364 -- # decimal 1 00:04:08.943 22:24:09 -- scripts/common.sh@352 -- # local d=1 00:04:08.943 22:24:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.943 22:24:09 -- scripts/common.sh@354 -- # echo 1 00:04:08.943 22:24:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:08.943 22:24:09 -- scripts/common.sh@365 -- # decimal 2 00:04:08.943 22:24:09 -- scripts/common.sh@352 -- # local d=2 00:04:08.943 22:24:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.943 22:24:09 -- scripts/common.sh@354 -- # echo 2 00:04:08.943 22:24:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:08.943 22:24:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:08.943 22:24:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:08.943 22:24:09 -- scripts/common.sh@367 -- # return 0 00:04:08.943 22:24:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.943 22:24:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:08.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.943 --rc genhtml_branch_coverage=1 00:04:08.943 --rc genhtml_function_coverage=1 00:04:08.943 --rc genhtml_legend=1 00:04:08.943 --rc geninfo_all_blocks=1 00:04:08.943 --rc geninfo_unexecuted_blocks=1 00:04:08.943 00:04:08.943 ' 00:04:08.943 22:24:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:08.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.943 --rc genhtml_branch_coverage=1 00:04:08.943 --rc genhtml_function_coverage=1 00:04:08.943 --rc genhtml_legend=1 00:04:08.943 --rc geninfo_all_blocks=1 00:04:08.943 --rc geninfo_unexecuted_blocks=1 00:04:08.943 00:04:08.943 ' 00:04:08.943 
22:24:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:08.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.943 --rc genhtml_branch_coverage=1 00:04:08.943 --rc genhtml_function_coverage=1 00:04:08.943 --rc genhtml_legend=1 00:04:08.943 --rc geninfo_all_blocks=1 00:04:08.943 --rc geninfo_unexecuted_blocks=1 00:04:08.943 00:04:08.943 ' 00:04:08.943 22:24:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:08.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.943 --rc genhtml_branch_coverage=1 00:04:08.943 --rc genhtml_function_coverage=1 00:04:08.943 --rc genhtml_legend=1 00:04:08.943 --rc geninfo_all_blocks=1 00:04:08.943 --rc geninfo_unexecuted_blocks=1 00:04:08.943 00:04:08.943 ' 00:04:08.943 22:24:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:08.943 22:24:09 -- nvmf/common.sh@7 -- # uname -s 00:04:08.943 22:24:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.943 22:24:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.943 22:24:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.943 22:24:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.943 22:24:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.943 22:24:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.943 22:24:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.943 22:24:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.943 22:24:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.943 22:24:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.943 22:24:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:04:08.943 22:24:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:04:08.943 22:24:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.943 22:24:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.943 22:24:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:08.943 22:24:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:08.943 22:24:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.943 22:24:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.943 22:24:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.943 22:24:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.943 22:24:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.943 22:24:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.943 22:24:09 -- paths/export.sh@5 -- # export PATH 00:04:08.943 22:24:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.943 22:24:09 -- nvmf/common.sh@46 -- # : 0 00:04:08.943 22:24:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:08.943 22:24:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:08.943 22:24:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:08.943 22:24:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.943 22:24:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.943 22:24:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:08.943 22:24:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:08.943 22:24:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:08.943 22:24:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:08.943 22:24:09 -- spdk/autotest.sh@32 -- # uname -s 00:04:08.943 22:24:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:08.943 22:24:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:08.943 22:24:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:08.943 22:24:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:08.943 22:24:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:08.943 22:24:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:08.943 22:24:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:08.943 22:24:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:08.943 22:24:09 -- spdk/autotest.sh@48 -- # udevadm_pid=61521 00:04:08.943 22:24:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:08.944 22:24:09 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:08.944 22:24:09 -- spdk/autotest.sh@54 -- # echo 61523 00:04:08.944 22:24:09 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:08.944 22:24:09 -- spdk/autotest.sh@56 -- # echo 61524 00:04:08.944 22:24:09 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:08.944 22:24:09 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:08.944 22:24:09 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.944 22:24:09 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:08.944 22:24:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:08.944 22:24:09 -- common/autotest_common.sh@10 -- # set +x 00:04:08.944 22:24:09 -- spdk/autotest.sh@70 -- # create_test_list 00:04:08.944 22:24:09 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:08.944 22:24:09 -- common/autotest_common.sh@10 -- # set +x 00:04:08.944 22:24:09 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:08.944 22:24:09 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:08.944 22:24:09 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:08.944 22:24:09 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:08.944 22:24:09 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:08.944 22:24:09 -- spdk/autotest.sh@76 -- # 
freebsd_update_contigmem_mod 00:04:08.944 22:24:09 -- common/autotest_common.sh@1450 -- # uname 00:04:08.944 22:24:09 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:08.944 22:24:09 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:08.944 22:24:09 -- common/autotest_common.sh@1470 -- # uname 00:04:08.944 22:24:09 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:08.944 22:24:09 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:08.944 22:24:09 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:09.203 lcov: LCOV version 1.15 00:04:09.203 22:24:09 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:15.766 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:15.766 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:15.766 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:15.766 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:15.766 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:15.766 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:33.851 22:24:33 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:33.851 22:24:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.851 22:24:33 -- common/autotest_common.sh@10 -- # set +x 00:04:33.851 22:24:33 -- spdk/autotest.sh@89 -- # rm -f 00:04:33.851 22:24:33 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.109 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:34.109 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:34.109 22:24:34 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:34.109 22:24:34 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:34.109 22:24:34 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:34.109 22:24:34 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:34.109 22:24:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:34.109 22:24:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:34.109 22:24:34 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:34.109 22:24:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:34.109 22:24:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:34.109 22:24:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:34.109 22:24:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:34.109 22:24:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:34.109 22:24:34 -- common/autotest_common.sh@1659 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:04:34.109 22:24:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:34.109 22:24:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:34.109 22:24:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:34.109 22:24:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:34.109 22:24:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:34.109 22:24:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:34.109 22:24:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:34.109 22:24:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:34.109 22:24:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:34.109 22:24:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:34.109 22:24:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:34.109 22:24:34 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:34.109 22:24:34 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:34.109 22:24:34 -- spdk/autotest.sh@108 -- # grep -v p 00:04:34.109 22:24:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:34.109 22:24:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:34.109 22:24:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:34.109 22:24:34 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:34.109 22:24:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:34.109 No valid GPT data, bailing 00:04:34.109 22:24:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:34.109 22:24:34 -- scripts/common.sh@393 -- # pt= 00:04:34.109 22:24:34 -- scripts/common.sh@394 -- # return 1 00:04:34.109 22:24:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:34.109 1+0 records in 00:04:34.109 1+0 records out 00:04:34.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443382 s, 236 MB/s 00:04:34.109 22:24:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:34.109 22:24:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:34.109 22:24:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:34.109 22:24:34 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:34.109 22:24:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:34.109 No valid GPT data, bailing 00:04:34.109 22:24:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:34.109 22:24:34 -- scripts/common.sh@393 -- # pt= 00:04:34.109 22:24:34 -- scripts/common.sh@394 -- # return 1 00:04:34.109 22:24:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:34.109 1+0 records in 00:04:34.109 1+0 records out 00:04:34.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519652 s, 202 MB/s 00:04:34.109 22:24:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:34.109 22:24:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:34.109 22:24:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:34.109 22:24:34 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:34.109 22:24:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:34.367 No valid GPT data, bailing 00:04:34.367 22:24:34 -- 
scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:34.367 22:24:34 -- scripts/common.sh@393 -- # pt= 00:04:34.367 22:24:34 -- scripts/common.sh@394 -- # return 1 00:04:34.367 22:24:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:34.367 1+0 records in 00:04:34.367 1+0 records out 00:04:34.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465367 s, 225 MB/s 00:04:34.367 22:24:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:34.367 22:24:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:34.367 22:24:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:34.367 22:24:34 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:34.367 22:24:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:34.367 No valid GPT data, bailing 00:04:34.367 22:24:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:34.367 22:24:34 -- scripts/common.sh@393 -- # pt= 00:04:34.367 22:24:34 -- scripts/common.sh@394 -- # return 1 00:04:34.367 22:24:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:34.367 1+0 records in 00:04:34.367 1+0 records out 00:04:34.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480253 s, 218 MB/s 00:04:34.367 22:24:34 -- spdk/autotest.sh@116 -- # sync 00:04:34.625 22:24:35 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:34.625 22:24:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:34.625 22:24:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.161 22:24:37 -- spdk/autotest.sh@122 -- # uname -s 00:04:37.161 22:24:37 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:37.161 22:24:37 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.161 22:24:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.161 22:24:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.161 22:24:37 -- common/autotest_common.sh@10 -- # set +x 00:04:37.161 ************************************ 00:04:37.161 START TEST setup.sh 00:04:37.161 ************************************ 00:04:37.161 22:24:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.161 * Looking for test storage... 
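Note on the pre-test scrub traced above: before the setup tests start, each /dev/nvme*n* namespace (partitions filtered out with `grep -v p`) is probed with scripts/spdk-gpt.py and `blkid -s PTTYPE`, and namespaces that carry no recognisable partition table are zero-filled for 1 MiB with dd so stale metadata cannot interfere with later tests. The following is a minimal stand-alone sketch of that pattern, not the exact autotest.sh code; the helper name is illustrative only.

    #!/usr/bin/env bash
    # Hypothetical sketch of the namespace scrub seen in the trace above.
    scrub_blank_namespaces() {
        local dev
        for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
            # Keep namespaces that already carry a partition table.
            if [[ -n "$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)" ]]; then
                continue
            fi
            # Nothing recognisable on the namespace: clear the first MiB,
            # matching the "dd if=/dev/zero of=$dev bs=1M count=1" calls in the log.
            dd if=/dev/zero of="$dev" bs=1M count=1
        done
    }
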
00:04:37.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.161 22:24:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:37.161 22:24:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:37.161 22:24:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:37.161 22:24:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:37.161 22:24:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:37.161 22:24:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:37.161 22:24:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:37.161 22:24:37 -- scripts/common.sh@335 -- # IFS=.-: 00:04:37.161 22:24:37 -- scripts/common.sh@335 -- # read -ra ver1 00:04:37.161 22:24:37 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.161 22:24:37 -- scripts/common.sh@336 -- # read -ra ver2 00:04:37.161 22:24:37 -- scripts/common.sh@337 -- # local 'op=<' 00:04:37.161 22:24:37 -- scripts/common.sh@339 -- # ver1_l=2 00:04:37.161 22:24:37 -- scripts/common.sh@340 -- # ver2_l=1 00:04:37.161 22:24:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:37.161 22:24:37 -- scripts/common.sh@343 -- # case "$op" in 00:04:37.161 22:24:37 -- scripts/common.sh@344 -- # : 1 00:04:37.161 22:24:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:37.161 22:24:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.161 22:24:37 -- scripts/common.sh@364 -- # decimal 1 00:04:37.161 22:24:37 -- scripts/common.sh@352 -- # local d=1 00:04:37.161 22:24:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.161 22:24:37 -- scripts/common.sh@354 -- # echo 1 00:04:37.161 22:24:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:37.161 22:24:37 -- scripts/common.sh@365 -- # decimal 2 00:04:37.161 22:24:37 -- scripts/common.sh@352 -- # local d=2 00:04:37.161 22:24:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.161 22:24:37 -- scripts/common.sh@354 -- # echo 2 00:04:37.161 22:24:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:37.161 22:24:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:37.161 22:24:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:37.161 22:24:37 -- scripts/common.sh@367 -- # return 0 00:04:37.161 22:24:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.161 22:24:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:37.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.161 --rc genhtml_branch_coverage=1 00:04:37.161 --rc genhtml_function_coverage=1 00:04:37.161 --rc genhtml_legend=1 00:04:37.161 --rc geninfo_all_blocks=1 00:04:37.161 --rc geninfo_unexecuted_blocks=1 00:04:37.161 00:04:37.161 ' 00:04:37.161 22:24:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:37.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.161 --rc genhtml_branch_coverage=1 00:04:37.161 --rc genhtml_function_coverage=1 00:04:37.161 --rc genhtml_legend=1 00:04:37.161 --rc geninfo_all_blocks=1 00:04:37.161 --rc geninfo_unexecuted_blocks=1 00:04:37.161 00:04:37.161 ' 00:04:37.161 22:24:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:37.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.161 --rc genhtml_branch_coverage=1 00:04:37.161 --rc genhtml_function_coverage=1 00:04:37.161 --rc genhtml_legend=1 00:04:37.161 --rc geninfo_all_blocks=1 00:04:37.161 --rc geninfo_unexecuted_blocks=1 00:04:37.161 00:04:37.161 ' 00:04:37.161 22:24:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:37.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.161 --rc genhtml_branch_coverage=1 00:04:37.161 --rc genhtml_function_coverage=1 00:04:37.161 --rc genhtml_legend=1 00:04:37.161 --rc geninfo_all_blocks=1 00:04:37.161 --rc geninfo_unexecuted_blocks=1 00:04:37.161 00:04:37.161 ' 00:04:37.161 22:24:37 -- setup/test-setup.sh@10 -- # uname -s 00:04:37.161 22:24:37 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:37.161 22:24:37 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.161 22:24:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.161 22:24:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.161 22:24:37 -- common/autotest_common.sh@10 -- # set +x 00:04:37.161 ************************************ 00:04:37.161 START TEST acl 00:04:37.161 ************************************ 00:04:37.161 22:24:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.161 * Looking for test storage... 00:04:37.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.161 22:24:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:37.161 22:24:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:37.161 22:24:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:37.420 22:24:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:37.420 22:24:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:37.420 22:24:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:37.420 22:24:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:37.420 22:24:37 -- scripts/common.sh@335 -- # IFS=.-: 00:04:37.420 22:24:37 -- scripts/common.sh@335 -- # read -ra ver1 00:04:37.420 22:24:37 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.420 22:24:37 -- scripts/common.sh@336 -- # read -ra ver2 00:04:37.420 22:24:37 -- scripts/common.sh@337 -- # local 'op=<' 00:04:37.420 22:24:37 -- scripts/common.sh@339 -- # ver1_l=2 00:04:37.420 22:24:37 -- scripts/common.sh@340 -- # ver2_l=1 00:04:37.420 22:24:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:37.420 22:24:37 -- scripts/common.sh@343 -- # case "$op" in 00:04:37.420 22:24:37 -- scripts/common.sh@344 -- # : 1 00:04:37.420 22:24:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:37.420 22:24:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.420 22:24:37 -- scripts/common.sh@364 -- # decimal 1 00:04:37.420 22:24:37 -- scripts/common.sh@352 -- # local d=1 00:04:37.420 22:24:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.420 22:24:37 -- scripts/common.sh@354 -- # echo 1 00:04:37.420 22:24:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:37.420 22:24:37 -- scripts/common.sh@365 -- # decimal 2 00:04:37.420 22:24:37 -- scripts/common.sh@352 -- # local d=2 00:04:37.420 22:24:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.420 22:24:37 -- scripts/common.sh@354 -- # echo 2 00:04:37.420 22:24:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:37.420 22:24:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:37.420 22:24:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:37.420 22:24:37 -- scripts/common.sh@367 -- # return 0 00:04:37.420 22:24:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.420 22:24:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:37.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.420 --rc genhtml_branch_coverage=1 00:04:37.420 --rc genhtml_function_coverage=1 00:04:37.420 --rc genhtml_legend=1 00:04:37.420 --rc geninfo_all_blocks=1 00:04:37.420 --rc geninfo_unexecuted_blocks=1 00:04:37.420 00:04:37.420 ' 00:04:37.420 22:24:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:37.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.420 --rc genhtml_branch_coverage=1 00:04:37.420 --rc genhtml_function_coverage=1 00:04:37.420 --rc genhtml_legend=1 00:04:37.420 --rc geninfo_all_blocks=1 00:04:37.420 --rc geninfo_unexecuted_blocks=1 00:04:37.420 00:04:37.420 ' 00:04:37.420 22:24:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:37.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.420 --rc genhtml_branch_coverage=1 00:04:37.421 --rc genhtml_function_coverage=1 00:04:37.421 --rc genhtml_legend=1 00:04:37.421 --rc geninfo_all_blocks=1 00:04:37.421 --rc geninfo_unexecuted_blocks=1 00:04:37.421 00:04:37.421 ' 00:04:37.421 22:24:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:37.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.421 --rc genhtml_branch_coverage=1 00:04:37.421 --rc genhtml_function_coverage=1 00:04:37.421 --rc genhtml_legend=1 00:04:37.421 --rc geninfo_all_blocks=1 00:04:37.421 --rc geninfo_unexecuted_blocks=1 00:04:37.421 00:04:37.421 ' 00:04:37.421 22:24:37 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:37.421 22:24:37 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:37.421 22:24:37 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:37.421 22:24:37 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:37.421 22:24:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:37.421 22:24:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:37.421 22:24:37 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:37.421 22:24:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.421 22:24:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:37.421 22:24:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:37.421 22:24:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:37.421 22:24:37 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:37.421 22:24:37 -- 
common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:37.421 22:24:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:37.421 22:24:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:37.421 22:24:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:37.421 22:24:37 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:37.421 22:24:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:37.421 22:24:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:37.421 22:24:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:37.421 22:24:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:37.421 22:24:37 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:37.421 22:24:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:37.421 22:24:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:37.421 22:24:37 -- setup/acl.sh@12 -- # devs=() 00:04:37.421 22:24:37 -- setup/acl.sh@12 -- # declare -a devs 00:04:37.421 22:24:37 -- setup/acl.sh@13 -- # drivers=() 00:04:37.421 22:24:37 -- setup/acl.sh@13 -- # declare -A drivers 00:04:37.421 22:24:37 -- setup/acl.sh@51 -- # setup reset 00:04:37.421 22:24:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.421 22:24:37 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.357 22:24:38 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:38.357 22:24:38 -- setup/acl.sh@16 -- # local dev driver 00:04:38.357 22:24:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.357 22:24:38 -- setup/acl.sh@15 -- # setup output status 00:04:38.357 22:24:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.357 22:24:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:38.357 Hugepages 00:04:38.357 node hugesize free / total 00:04:38.357 22:24:38 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:38.357 22:24:38 -- setup/acl.sh@19 -- # continue 00:04:38.357 22:24:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.357 00:04:38.357 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.357 22:24:38 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:38.357 22:24:38 -- setup/acl.sh@19 -- # continue 00:04:38.357 22:24:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.357 22:24:38 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:38.357 22:24:38 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:38.357 22:24:38 -- setup/acl.sh@20 -- # continue 00:04:38.357 22:24:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.357 22:24:39 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:38.357 22:24:39 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:38.357 22:24:39 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:38.357 22:24:39 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:38.357 22:24:39 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:38.357 22:24:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.620 22:24:39 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:38.620 22:24:39 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:38.620 22:24:39 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:38.620 22:24:39 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:38.620 22:24:39 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
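Note on the zoned-device filter traced above: before collecting PCI devices, the acl test walks /sys/block/nvme* and treats a namespace as zoned when its queue/zoned attribute holds anything other than "none", so such devices can be excluded from the allow/deny checks. A simplified, hedged sketch of that check follows; the function name is illustrative and not the helper used in the log.

    # Hypothetical sketch of the zoned-namespace check seen in the trace above.
    list_zoned_namespaces() {
        local sysdir
        for sysdir in /sys/block/nvme*; do
            [[ -e $sysdir/queue/zoned ]] || continue
            # "none" marks a conventional namespace and is kept; anything else
            # (e.g. host-managed) is reported so the caller can skip it.
            if [[ $(<"$sysdir/queue/zoned") != none ]]; then
                echo "${sysdir##*/}"
            fi
        done
    }
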
00:04:38.620 22:24:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.620 22:24:39 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:38.620 22:24:39 -- setup/acl.sh@54 -- # run_test denied denied 00:04:38.620 22:24:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.620 22:24:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.620 22:24:39 -- common/autotest_common.sh@10 -- # set +x 00:04:38.620 ************************************ 00:04:38.620 START TEST denied 00:04:38.620 ************************************ 00:04:38.620 22:24:39 -- common/autotest_common.sh@1114 -- # denied 00:04:38.620 22:24:39 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:38.620 22:24:39 -- setup/acl.sh@38 -- # setup output config 00:04:38.620 22:24:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.620 22:24:39 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:38.620 22:24:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:39.558 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:39.558 22:24:40 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:39.558 22:24:40 -- setup/acl.sh@28 -- # local dev driver 00:04:39.558 22:24:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:39.558 22:24:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:39.558 22:24:40 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:39.558 22:24:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:39.558 22:24:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:39.558 22:24:40 -- setup/acl.sh@41 -- # setup reset 00:04:39.558 22:24:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.558 22:24:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.126 00:04:40.126 real 0m1.618s 00:04:40.126 user 0m0.612s 00:04:40.126 sys 0m0.937s 00:04:40.126 22:24:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.126 22:24:40 -- common/autotest_common.sh@10 -- # set +x 00:04:40.126 ************************************ 00:04:40.126 END TEST denied 00:04:40.126 ************************************ 00:04:40.126 22:24:40 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:40.126 22:24:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.126 22:24:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.126 22:24:40 -- common/autotest_common.sh@10 -- # set +x 00:04:40.126 ************************************ 00:04:40.126 START TEST allowed 00:04:40.126 ************************************ 00:04:40.126 22:24:40 -- common/autotest_common.sh@1114 -- # allowed 00:04:40.126 22:24:40 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:40.126 22:24:40 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:40.126 22:24:40 -- setup/acl.sh@45 -- # setup output config 00:04:40.126 22:24:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.126 22:24:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.063 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.063 22:24:41 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:41.063 22:24:41 -- setup/acl.sh@28 -- # local dev driver 00:04:41.063 22:24:41 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.063 22:24:41 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:41.063 22:24:41 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:41.063 22:24:41 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.063 22:24:41 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.063 22:24:41 -- setup/acl.sh@48 -- # setup reset 00:04:41.063 22:24:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.063 22:24:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.050 00:04:42.050 real 0m1.647s 00:04:42.050 user 0m0.722s 00:04:42.050 sys 0m0.917s 00:04:42.050 22:24:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.050 22:24:42 -- common/autotest_common.sh@10 -- # set +x 00:04:42.050 ************************************ 00:04:42.050 END TEST allowed 00:04:42.050 ************************************ 00:04:42.050 00:04:42.050 real 0m4.772s 00:04:42.050 user 0m2.019s 00:04:42.050 sys 0m2.699s 00:04:42.050 22:24:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.050 22:24:42 -- common/autotest_common.sh@10 -- # set +x 00:04:42.050 ************************************ 00:04:42.050 END TEST acl 00:04:42.050 ************************************ 00:04:42.050 22:24:42 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:42.050 22:24:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.050 22:24:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.050 22:24:42 -- common/autotest_common.sh@10 -- # set +x 00:04:42.050 ************************************ 00:04:42.050 START TEST hugepages 00:04:42.050 ************************************ 00:04:42.050 22:24:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:42.050 * Looking for test storage... 00:04:42.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:42.050 22:24:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:42.050 22:24:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:42.050 22:24:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.050 22:24:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.050 22:24:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.050 22:24:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.050 22:24:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.050 22:24:42 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.050 22:24:42 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.050 22:24:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.050 22:24:42 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.050 22:24:42 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.050 22:24:42 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.050 22:24:42 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.050 22:24:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.050 22:24:42 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.050 22:24:42 -- scripts/common.sh@344 -- # : 1 00:04:42.050 22:24:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.050 22:24:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.050 22:24:42 -- scripts/common.sh@364 -- # decimal 1 00:04:42.050 22:24:42 -- scripts/common.sh@352 -- # local d=1 00:04:42.050 22:24:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.050 22:24:42 -- scripts/common.sh@354 -- # echo 1 00:04:42.050 22:24:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.050 22:24:42 -- scripts/common.sh@365 -- # decimal 2 00:04:42.050 22:24:42 -- scripts/common.sh@352 -- # local d=2 00:04:42.050 22:24:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.050 22:24:42 -- scripts/common.sh@354 -- # echo 2 00:04:42.050 22:24:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.050 22:24:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.050 22:24:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.050 22:24:42 -- scripts/common.sh@367 -- # return 0 00:04:42.050 22:24:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.050 22:24:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.050 --rc genhtml_branch_coverage=1 00:04:42.050 --rc genhtml_function_coverage=1 00:04:42.050 --rc genhtml_legend=1 00:04:42.050 --rc geninfo_all_blocks=1 00:04:42.050 --rc geninfo_unexecuted_blocks=1 00:04:42.050 00:04:42.050 ' 00:04:42.050 22:24:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.050 --rc genhtml_branch_coverage=1 00:04:42.050 --rc genhtml_function_coverage=1 00:04:42.050 --rc genhtml_legend=1 00:04:42.050 --rc geninfo_all_blocks=1 00:04:42.050 --rc geninfo_unexecuted_blocks=1 00:04:42.050 00:04:42.050 ' 00:04:42.050 22:24:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.050 --rc genhtml_branch_coverage=1 00:04:42.050 --rc genhtml_function_coverage=1 00:04:42.050 --rc genhtml_legend=1 00:04:42.050 --rc geninfo_all_blocks=1 00:04:42.050 --rc geninfo_unexecuted_blocks=1 00:04:42.050 00:04:42.050 ' 00:04:42.050 22:24:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.050 --rc genhtml_branch_coverage=1 00:04:42.050 --rc genhtml_function_coverage=1 00:04:42.050 --rc genhtml_legend=1 00:04:42.050 --rc geninfo_all_blocks=1 00:04:42.050 --rc geninfo_unexecuted_blocks=1 00:04:42.050 00:04:42.050 ' 00:04:42.050 22:24:42 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:42.050 22:24:42 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:42.050 22:24:42 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:42.050 22:24:42 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:42.050 22:24:42 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:42.320 22:24:42 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:42.320 22:24:42 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:42.320 22:24:42 -- setup/common.sh@18 -- # local node= 00:04:42.320 22:24:42 -- setup/common.sh@19 -- # local var val 00:04:42.320 22:24:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.320 22:24:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.320 22:24:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.320 22:24:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.320 22:24:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.320 
22:24:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 4722312 kB' 'MemAvailable: 7357144 kB' 'Buffers: 2684 kB' 'Cached: 2836420 kB' 'SwapCached: 0 kB' 'Active: 496320 kB' 'Inactive: 2459440 kB' 'Active(anon): 127168 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459440 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 118244 kB' 'Mapped: 51128 kB' 'Shmem: 10512 kB' 'KReclaimable: 86372 kB' 'Slab: 187972 kB' 'SReclaimable: 86372 kB' 'SUnreclaim: 101600 kB' 'KernelStack: 6880 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 309936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- 
setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.320 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.320 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 
22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # continue 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.321 22:24:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.321 22:24:42 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.321 22:24:42 -- setup/common.sh@33 -- # echo 2048 00:04:42.321 22:24:42 -- setup/common.sh@33 -- # return 0 00:04:42.321 22:24:42 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:42.321 22:24:42 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:42.321 22:24:42 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:42.321 22:24:42 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:42.321 22:24:42 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:42.321 22:24:42 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:42.321 22:24:42 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:42.321 22:24:42 -- setup/hugepages.sh@207 -- # get_nodes 00:04:42.321 22:24:42 -- setup/hugepages.sh@27 -- # local node 00:04:42.321 22:24:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.321 22:24:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:42.321 22:24:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.321 22:24:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.322 22:24:42 -- setup/hugepages.sh@208 -- # clear_hp 00:04:42.322 22:24:42 -- setup/hugepages.sh@37 -- # local node hp 00:04:42.322 22:24:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.322 22:24:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.322 22:24:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.322 22:24:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.322 22:24:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.322 
22:24:42 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:42.322 22:24:42 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:42.322 22:24:42 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:42.322 22:24:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.322 22:24:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.322 22:24:42 -- common/autotest_common.sh@10 -- # set +x 00:04:42.322 ************************************ 00:04:42.322 START TEST default_setup 00:04:42.322 ************************************ 00:04:42.322 22:24:42 -- common/autotest_common.sh@1114 -- # default_setup 00:04:42.322 22:24:42 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:42.322 22:24:42 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.322 22:24:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:42.322 22:24:42 -- setup/hugepages.sh@51 -- # shift 00:04:42.322 22:24:42 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:42.322 22:24:42 -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.322 22:24:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.322 22:24:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.322 22:24:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:42.322 22:24:42 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:42.322 22:24:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.322 22:24:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.322 22:24:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:42.322 22:24:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.322 22:24:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.322 22:24:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:42.322 22:24:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.322 22:24:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:42.322 22:24:42 -- setup/hugepages.sh@73 -- # return 0 00:04:42.322 22:24:42 -- setup/hugepages.sh@137 -- # setup output 00:04:42.322 22:24:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.322 22:24:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.152 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.152 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.152 22:24:43 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:43.152 22:24:43 -- setup/hugepages.sh@89 -- # local node 00:04:43.152 22:24:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.152 22:24:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.152 22:24:43 -- setup/hugepages.sh@92 -- # local surp 00:04:43.152 22:24:43 -- setup/hugepages.sh@93 -- # local resv 00:04:43.152 22:24:43 -- setup/hugepages.sh@94 -- # local anon 00:04:43.152 22:24:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.152 22:24:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.152 22:24:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.152 22:24:43 -- setup/common.sh@18 -- # local node= 00:04:43.152 22:24:43 -- setup/common.sh@19 -- # local var val 00:04:43.152 22:24:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.152 22:24:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.152 22:24:43 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:43.152 22:24:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.152 22:24:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.152 22:24:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6832208 kB' 'MemAvailable: 9466824 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497844 kB' 'Inactive: 2459444 kB' 'Active(anon): 128692 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119796 kB' 'Mapped: 50976 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187540 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101604 kB' 'KernelStack: 6864 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- 
setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.152 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.152 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.153 22:24:43 -- setup/common.sh@33 -- # echo 0 00:04:43.153 22:24:43 -- setup/common.sh@33 -- # return 0 00:04:43.153 22:24:43 -- setup/hugepages.sh@97 -- # anon=0 00:04:43.153 22:24:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.153 22:24:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.153 22:24:43 -- setup/common.sh@18 -- # local node= 00:04:43.153 22:24:43 -- setup/common.sh@19 -- # local var val 00:04:43.153 22:24:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.153 22:24:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.153 22:24:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.153 22:24:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.153 22:24:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.153 22:24:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6832208 kB' 'MemAvailable: 9466824 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497348 kB' 'Inactive: 2459444 kB' 
'Active(anon): 128196 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119288 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187536 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101600 kB' 'KernelStack: 6816 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # 
continue 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.153 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.153 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.154 22:24:43 -- setup/common.sh@33 -- # echo 0 00:04:43.154 22:24:43 -- setup/common.sh@33 -- # return 0 00:04:43.154 22:24:43 -- setup/hugepages.sh@99 -- # surp=0 00:04:43.154 22:24:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.154 22:24:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.154 22:24:43 -- setup/common.sh@18 -- # local node= 00:04:43.154 22:24:43 -- setup/common.sh@19 -- # local var val 00:04:43.154 22:24:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.154 22:24:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.154 22:24:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.154 22:24:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.154 22:24:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.154 22:24:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6832208 kB' 'MemAvailable: 9466824 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497288 kB' 'Inactive: 2459444 kB' 'Active(anon): 128136 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119264 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187536 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101600 kB' 'KernelStack: 6816 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.154 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.154 22:24:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.155 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.155 22:24:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.156 22:24:43 -- setup/common.sh@33 -- # echo 0 00:04:43.156 22:24:43 -- setup/common.sh@33 -- # return 0 00:04:43.156 22:24:43 -- setup/hugepages.sh@100 -- # resv=0 00:04:43.156 nr_hugepages=1024 00:04:43.156 22:24:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.156 resv_hugepages=0 00:04:43.156 22:24:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.156 surplus_hugepages=0 00:04:43.156 22:24:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.156 anon_hugepages=0 00:04:43.156 22:24:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.156 22:24:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.156 22:24:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.156 22:24:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.156 22:24:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.156 22:24:43 -- setup/common.sh@18 -- # local node= 00:04:43.156 22:24:43 -- setup/common.sh@19 -- # local var val 00:04:43.156 22:24:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.156 22:24:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.156 22:24:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.156 22:24:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.156 22:24:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.156 22:24:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6832844 kB' 'MemAvailable: 9467460 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497524 kB' 'Inactive: 2459444 kB' 'Active(anon): 128372 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187540 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101604 kB' 'KernelStack: 6816 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 
00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 
00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.156 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.156 22:24:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.157 22:24:43 -- setup/common.sh@33 -- # echo 1024 00:04:43.157 22:24:43 -- setup/common.sh@33 -- # return 0 00:04:43.157 22:24:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.157 22:24:43 -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.157 22:24:43 -- setup/hugepages.sh@27 -- # local node 00:04:43.157 22:24:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.157 22:24:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.157 22:24:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:43.157 22:24:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.157 22:24:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.157 22:24:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.157 22:24:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.157 22:24:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.157 22:24:43 -- setup/common.sh@18 -- # local node=0 00:04:43.157 22:24:43 -- 
setup/common.sh@19 -- # local var val 00:04:43.157 22:24:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.157 22:24:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.157 22:24:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.157 22:24:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.157 22:24:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.157 22:24:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6832844 kB' 'MemUsed: 5406268 kB' 'SwapCached: 0 kB' 'Active: 497316 kB' 'Inactive: 2459444 kB' 'Active(anon): 128164 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2839096 kB' 'Mapped: 50904 kB' 'AnonPages: 119300 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85936 kB' 'Slab: 187540 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.157 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.157 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.158 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.158 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.418 22:24:43 -- setup/common.sh@32 -- 
# continue 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # continue 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.418 22:24:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.418 22:24:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.418 22:24:43 -- setup/common.sh@33 -- # echo 0 00:04:43.418 22:24:43 -- setup/common.sh@33 -- # return 0 00:04:43.418 22:24:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.418 22:24:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.418 22:24:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.418 22:24:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.418 node0=1024 expecting 1024 00:04:43.418 22:24:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.418 22:24:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.418 00:04:43.418 real 0m1.046s 00:04:43.418 user 0m0.484s 00:04:43.418 sys 0m0.503s 00:04:43.418 22:24:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.418 22:24:43 -- common/autotest_common.sh@10 -- # set +x 00:04:43.418 ************************************ 00:04:43.418 END TEST default_setup 00:04:43.418 ************************************ 00:04:43.418 22:24:43 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:43.418 22:24:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.418 22:24:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.418 22:24:43 -- common/autotest_common.sh@10 -- # set +x 00:04:43.418 ************************************ 00:04:43.418 START TEST per_node_1G_alloc 00:04:43.418 ************************************ 00:04:43.418 22:24:43 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:43.418 22:24:43 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:43.418 22:24:43 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:43.418 22:24:43 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:43.418 22:24:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:43.418 22:24:43 -- setup/hugepages.sh@51 -- # shift 00:04:43.418 22:24:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:43.418 22:24:43 -- setup/hugepages.sh@52 -- # local node_ids 00:04:43.418 22:24:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.418 22:24:43 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:43.418 22:24:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:43.418 22:24:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:43.418 22:24:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.418 22:24:43 -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=512 00:04:43.418 22:24:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:43.418 22:24:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.418 22:24:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.418 22:24:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:43.418 22:24:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.418 22:24:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.418 22:24:43 -- setup/hugepages.sh@73 -- # return 0 00:04:43.418 22:24:43 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:43.418 22:24:43 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:43.418 22:24:43 -- setup/hugepages.sh@146 -- # setup output 00:04:43.418 22:24:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.418 22:24:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.678 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:43.678 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:43.678 22:24:44 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:43.678 22:24:44 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:43.678 22:24:44 -- setup/hugepages.sh@89 -- # local node 00:04:43.678 22:24:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.678 22:24:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.678 22:24:44 -- setup/hugepages.sh@92 -- # local surp 00:04:43.678 22:24:44 -- setup/hugepages.sh@93 -- # local resv 00:04:43.678 22:24:44 -- setup/hugepages.sh@94 -- # local anon 00:04:43.678 22:24:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.678 22:24:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.678 22:24:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.678 22:24:44 -- setup/common.sh@18 -- # local node= 00:04:43.678 22:24:44 -- setup/common.sh@19 -- # local var val 00:04:43.678 22:24:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.678 22:24:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.678 22:24:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.678 22:24:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.678 22:24:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.678 22:24:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7879328 kB' 'MemAvailable: 10513956 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497968 kB' 'Inactive: 2459456 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119912 kB' 'Mapped: 51060 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187568 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101632 kB' 'KernelStack: 6824 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.678 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.678 22:24:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # 
[[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.679 22:24:44 -- setup/common.sh@33 -- # echo 0 00:04:43.679 22:24:44 -- setup/common.sh@33 -- # return 0 00:04:43.679 22:24:44 -- setup/hugepages.sh@97 -- # anon=0 00:04:43.679 22:24:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.679 22:24:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.679 22:24:44 -- setup/common.sh@18 -- # local node= 00:04:43.679 22:24:44 -- setup/common.sh@19 -- # local var val 00:04:43.679 22:24:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.679 22:24:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.679 22:24:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.679 22:24:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.679 22:24:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.679 22:24:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7879328 kB' 'MemAvailable: 10513956 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497536 kB' 'Inactive: 2459456 kB' 'Active(anon): 128384 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187580 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101644 kB' 'KernelStack: 6768 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.679 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.679 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.680 22:24:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.680 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.680 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.680 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- 
# continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.941 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.941 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.942 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:43.942 22:24:44 -- setup/common.sh@33 -- # echo 0 00:04:43.942 22:24:44 -- setup/common.sh@33 -- # return 0 00:04:43.942 22:24:44 -- setup/hugepages.sh@99 -- # surp=0 00:04:43.942 22:24:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.942 22:24:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.942 22:24:44 -- setup/common.sh@18 -- # local node= 00:04:43.942 22:24:44 -- setup/common.sh@19 -- # local var val 00:04:43.942 22:24:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.942 22:24:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.942 22:24:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.942 22:24:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.942 22:24:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.942 22:24:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.942 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7879328 kB' 'MemAvailable: 10513956 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497760 kB' 'Inactive: 2459456 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187568 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101632 kB' 'KernelStack: 6752 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 
22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 
22:24:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': 
' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.943 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.943 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.944 22:24:44 -- setup/common.sh@33 -- # echo 0 00:04:43.944 22:24:44 -- setup/common.sh@33 -- # return 0 00:04:43.944 22:24:44 -- setup/hugepages.sh@100 -- # resv=0 00:04:43.944 nr_hugepages=512 00:04:43.944 22:24:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:43.944 resv_hugepages=0 00:04:43.944 22:24:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.944 surplus_hugepages=0 00:04:43.944 22:24:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.944 anon_hugepages=0 00:04:43.944 22:24:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.944 22:24:44 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:43.944 22:24:44 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:43.944 22:24:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.944 22:24:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.944 22:24:44 -- setup/common.sh@18 -- # local node= 00:04:43.944 22:24:44 -- setup/common.sh@19 -- # local var val 00:04:43.944 22:24:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.944 22:24:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.944 22:24:44 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:43.944 22:24:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.944 22:24:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.944 22:24:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7879328 kB' 'MemAvailable: 10513956 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497296 kB' 'Inactive: 2459456 kB' 'Active(anon): 128144 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119196 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187564 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101628 kB' 'KernelStack: 6736 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.944 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.944 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 
22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.945 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.945 22:24:44 -- setup/common.sh@33 -- # echo 512 00:04:43.945 22:24:44 -- setup/common.sh@33 -- # return 0 00:04:43.945 22:24:44 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:43.945 22:24:44 -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.945 22:24:44 -- setup/hugepages.sh@27 -- # local node 00:04:43.945 22:24:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.945 22:24:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:43.945 22:24:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:43.945 22:24:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.945 22:24:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.945 22:24:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.945 22:24:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.945 22:24:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.945 22:24:44 -- setup/common.sh@18 -- # local node=0 00:04:43.945 22:24:44 -- setup/common.sh@19 -- # local var val 00:04:43.945 22:24:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.945 22:24:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.945 22:24:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.945 22:24:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.945 22:24:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.945 22:24:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.945 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7879328 kB' 'MemUsed: 4359784 kB' 'SwapCached: 0 kB' 'Active: 497504 kB' 'Inactive: 2459456 kB' 'Active(anon): 128352 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2839096 kB' 'Mapped: 50904 kB' 'AnonPages: 119432 kB' 'Shmem: 10488 kB' 'KernelStack: 6784 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85936 kB' 'Slab: 187564 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101628 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 
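Note: the xtrace above is setup/common.sh's get_meminfo() walking a meminfo dump field by field (here /sys/devices/system/node/node0/meminfo, looking for HugePages_Surp) until it reaches the requested key and echoes its value. Below is a minimal, simplified sketch of that parsing pattern; the name get_meminfo_sketch is illustrative only, and this is not the real setup/common.sh, which keeps the dump in a mapfile array and strips the "Node N " prefix in-place as seen at common.sh@28-29.

#!/usr/bin/env bash
# Sketch only: look up one field of (optionally per-node) meminfo, the way the
# trace above does with IFS=': ' / read -r var val _.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node files prefix every line with "Node <n> "; drop it so both
    # formats parse as plain "Field: value" pairs.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")

    echo 0   # field absent: report 0, matching the trace's fallback
}

# Usage: get_meminfo_sketch HugePages_Surp 0   -> surplus huge pages on node 0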
00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.946 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.946 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.947 22:24:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.947 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.947 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.947 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.947 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.947 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.947 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.947 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.947 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.947 22:24:44 -- setup/common.sh@32 -- # continue 00:04:43.947 22:24:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.947 22:24:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.947 22:24:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.947 22:24:44 -- setup/common.sh@33 -- # echo 0 00:04:43.947 22:24:44 -- setup/common.sh@33 -- # return 0 00:04:43.947 22:24:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.947 22:24:44 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.947 22:24:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.947 22:24:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.947 node0=512 expecting 512 00:04:43.947 22:24:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:43.947 22:24:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:43.947 00:04:43.947 real 0m0.578s 00:04:43.947 user 0m0.271s 00:04:43.947 sys 0m0.343s 00:04:43.947 22:24:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.947 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.947 ************************************ 00:04:43.947 END TEST per_node_1G_alloc 00:04:43.947 ************************************ 00:04:43.947 22:24:44 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:43.947 22:24:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.947 22:24:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.947 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.947 ************************************ 00:04:43.947 START TEST even_2G_alloc 00:04:43.947 ************************************ 00:04:43.947 22:24:44 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:43.947 22:24:44 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:43.947 22:24:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:43.947 22:24:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:43.947 22:24:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.947 22:24:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:43.947 22:24:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:43.947 22:24:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:43.947 22:24:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.947 22:24:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:43.947 22:24:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:43.947 22:24:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.947 22:24:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.947 22:24:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:43.947 22:24:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:43.947 22:24:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:43.947 22:24:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:43.947 22:24:44 -- setup/hugepages.sh@83 -- # : 0 00:04:43.947 22:24:44 -- setup/hugepages.sh@84 -- # : 0 00:04:43.947 22:24:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:43.947 22:24:44 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:43.947 22:24:44 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:43.947 22:24:44 -- setup/hugepages.sh@153 -- # setup output 00:04:43.947 22:24:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.947 22:24:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.469 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:44.469 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:44.469 22:24:44 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:44.469 22:24:44 -- setup/hugepages.sh@89 -- # local node 00:04:44.469 22:24:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.469 22:24:44 -- 
setup/hugepages.sh@91 -- # local sorted_s 00:04:44.469 22:24:44 -- setup/hugepages.sh@92 -- # local surp 00:04:44.469 22:24:44 -- setup/hugepages.sh@93 -- # local resv 00:04:44.469 22:24:44 -- setup/hugepages.sh@94 -- # local anon 00:04:44.469 22:24:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.469 22:24:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.469 22:24:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.469 22:24:44 -- setup/common.sh@18 -- # local node= 00:04:44.469 22:24:44 -- setup/common.sh@19 -- # local var val 00:04:44.469 22:24:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.469 22:24:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.469 22:24:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.469 22:24:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.469 22:24:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.469 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6827228 kB' 'MemAvailable: 9461856 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497848 kB' 'Inactive: 2459456 kB' 'Active(anon): 128696 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 51032 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187624 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101688 kB' 'KernelStack: 6760 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.469 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.469 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 
22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 
22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.470 22:24:45 -- setup/common.sh@33 -- # echo 0 00:04:44.470 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:44.470 22:24:45 -- setup/hugepages.sh@97 -- # anon=0 00:04:44.470 22:24:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.470 22:24:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.470 22:24:45 -- setup/common.sh@18 -- # local node= 00:04:44.470 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:44.470 22:24:45 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:44.470 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.470 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.470 22:24:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.470 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.470 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6827228 kB' 'MemAvailable: 9461856 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497628 kB' 'Inactive: 2459456 kB' 'Active(anon): 128476 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119568 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187660 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101724 kB' 'KernelStack: 6800 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.470 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.470 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 
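Note: the even_2G_alloc run is now in the same bookkeeping pass the 512-page (1G) case completed above: HugePages_Total, HugePages_Rsvd and HugePages_Surp are pulled out of meminfo, summed against the requested count, and each NUMA node's pool is checked (the run above printed 'node0=512 expecting 512'). Below is a rough sketch of that accounting, reusing the illustrative get_meminfo_sketch helper; it is a single-node simplification (no_nodes=1 in this run) and not the real setup/hugepages.sh, which tracks per-node targets in the nodes_test/nodes_sys arrays.

# Sketch only: confirm the kernel's hugepage pool matches what the test
# requested, globally and then per NUMA node.
verify_nr_hugepages_sketch() {
    local expected=$1                       # e.g. 512 (1G total) or 1024 (2G total)
    local total surp resv node node_total

    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)

    # Global accounting, as in "(( 512 == nr_hugepages + surp + resv ))"
    (( total == expected + surp + resv )) || return 1

    # Per-node accounting (single-node simplification: each node is expected
    # to hold the full count, as node0 does in this run).
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] || continue
        node=${node##*node}
        node_total=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node${node}=${node_total} expecting ${expected}"
        [[ $node_total == "$expected" ]] || return 1
    done
}

# Usage: verify_nr_hugepages_sketch 1024   # the even_2G_alloc case (2048 kB pages)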
00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- 
setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.471 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.471 22:24:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.472 22:24:45 -- setup/common.sh@33 -- # echo 0 00:04:44.472 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:44.472 22:24:45 -- setup/hugepages.sh@99 -- # surp=0 00:04:44.472 22:24:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.472 22:24:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.472 22:24:45 -- setup/common.sh@18 -- # local node= 00:04:44.472 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:44.472 22:24:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.472 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.472 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.472 22:24:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.472 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.472 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6827228 kB' 'MemAvailable: 9461856 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497772 kB' 'Inactive: 2459456 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119704 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187656 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101720 kB' 'KernelStack: 6784 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.472 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.472 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 
-- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.473 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.473 22:24:45 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 
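These per-key scans gather the numbers for the consistency checks recorded just below (setup/hugepages.sh@107 and @110): the even_2G_alloc test only passes if the kernel's hugepage counters add up to the 1024 pages it requested. Roughly, using the same names the trace uses (nr_hugepages, surp, resv) plus a hypothetical total variable standing in for the HugePages_Total readback:

    nr_hugepages=1024                         # requested by the even_2G_alloc test
    surp=$(get_meminfo HugePages_Surp)        # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
    total=$(get_meminfo HugePages_Total)      # 1024 in this run
    # mirrors the (( 1024 == nr_hugepages + surp + resv )) checks at hugepages.sh@107/@110
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"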
00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.474 22:24:45 -- setup/common.sh@33 -- # echo 0 00:04:44.474 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:44.474 22:24:45 -- setup/hugepages.sh@100 -- # resv=0 00:04:44.474 22:24:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.474 nr_hugepages=1024 00:04:44.474 22:24:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.474 resv_hugepages=0 00:04:44.474 22:24:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.474 surplus_hugepages=0 00:04:44.474 22:24:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.474 anon_hugepages=0 00:04:44.474 22:24:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.474 22:24:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.474 22:24:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.474 22:24:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.474 22:24:45 -- setup/common.sh@18 -- # local node= 00:04:44.474 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:44.474 22:24:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.474 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.474 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.474 22:24:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.474 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.474 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6827480 kB' 'MemAvailable: 9462108 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497520 kB' 'Inactive: 2459456 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119448 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187656 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101720 kB' 'KernelStack: 6784 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.474 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.474 22:24:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # 
continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.475 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.475 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 
22:24:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.476 22:24:45 -- setup/common.sh@33 -- # echo 1024 00:04:44.476 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:44.476 22:24:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.476 22:24:45 -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.476 22:24:45 -- setup/hugepages.sh@27 -- # local 
node 00:04:44.476 22:24:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.476 22:24:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.476 22:24:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:44.476 22:24:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.476 22:24:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.476 22:24:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.476 22:24:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.476 22:24:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.476 22:24:45 -- setup/common.sh@18 -- # local node=0 00:04:44.476 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:44.476 22:24:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.476 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.476 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.476 22:24:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.476 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.476 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6827480 kB' 'MemUsed: 5411632 kB' 'SwapCached: 0 kB' 'Active: 497364 kB' 'Inactive: 2459456 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2839096 kB' 'Mapped: 50904 kB' 'AnonPages: 119308 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85936 kB' 'Slab: 187656 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.476 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.476 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.477 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.477 22:24:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.477 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.477 22:24:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:44.477 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 
22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 
00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 22:24:45 -- setup/common.sh@33 -- # echo 0 00:04:44.737 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:44.737 22:24:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.737 22:24:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.737 22:24:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.737 22:24:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.737 node0=1024 expecting 1024 00:04:44.737 22:24:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:44.737 ************************************ 00:04:44.737 END TEST even_2G_alloc 00:04:44.737 ************************************ 00:04:44.737 22:24:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:44.737 00:04:44.737 real 0m0.650s 00:04:44.737 user 0m0.312s 00:04:44.737 sys 0m0.320s 00:04:44.737 22:24:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.737 22:24:45 -- common/autotest_common.sh@10 -- # set +x 00:04:44.737 22:24:45 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:44.737 22:24:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.737 22:24:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.737 22:24:45 -- common/autotest_common.sh@10 -- # set +x 00:04:44.737 ************************************ 00:04:44.737 START TEST odd_alloc 00:04:44.737 ************************************ 00:04:44.737 22:24:45 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:44.737 22:24:45 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:44.737 22:24:45 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:44.737 22:24:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 
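A quick note on the numbers in the odd_alloc test that starts here: the requested size of 2098176 kB lines up with HUGEMEM=2049 (set a few lines below) taken in MB, i.e. 2049 * 1024 kB, and at the 2048 kB Hugepagesize reported throughout this log that works out to 1024.5 pages, which the test turns into an odd count of 1025 (hence nr_hugepages=1025 below, and HugePages_Total: 1025 with Hugetlb: 2099200 kB in the later meminfo dumps). One way to reproduce the arithmetic, using values taken from this trace:

    hugemem_mb=2049                                  # HUGEMEM set below
    size_kb=$(( hugemem_mb * 1024 ))                 # 2098176, the get_test_nr_hugepages argument
    page_kb=2048                                     # Hugepagesize from the meminfo dumps
    pages=$(( (size_kb + page_kb - 1) / page_kb ))   # ceiling division -> 1025
    echo "nr_hugepages=$pages"                       # matches nr_hugepages=1025 in the trace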
00:04:44.737 22:24:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.737 22:24:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:44.737 22:24:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:44.737 22:24:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:44.737 22:24:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.737 22:24:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:44.737 22:24:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:44.737 22:24:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.737 22:24:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.737 22:24:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:44.737 22:24:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:44.737 22:24:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.737 22:24:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:44.737 22:24:45 -- setup/hugepages.sh@83 -- # : 0 00:04:44.737 22:24:45 -- setup/hugepages.sh@84 -- # : 0 00:04:44.737 22:24:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.737 22:24:45 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:44.737 22:24:45 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:44.737 22:24:45 -- setup/hugepages.sh@160 -- # setup output 00:04:44.737 22:24:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.737 22:24:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.997 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:44.997 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:44.997 22:24:45 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:44.997 22:24:45 -- setup/hugepages.sh@89 -- # local node 00:04:44.997 22:24:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.997 22:24:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.997 22:24:45 -- setup/hugepages.sh@92 -- # local surp 00:04:44.997 22:24:45 -- setup/hugepages.sh@93 -- # local resv 00:04:44.997 22:24:45 -- setup/hugepages.sh@94 -- # local anon 00:04:44.997 22:24:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.997 22:24:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.997 22:24:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.997 22:24:45 -- setup/common.sh@18 -- # local node= 00:04:44.997 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:44.997 22:24:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.997 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.997 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.997 22:24:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.997 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.997 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6828768 kB' 'MemAvailable: 9463396 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 498196 kB' 'Inactive: 2459456 kB' 'Active(anon): 129044 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120216 kB' 'Mapped: 50960 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187664 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101728 kB' 'KernelStack: 6852 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 
22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.997 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.997 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.998 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.998 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.998 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.998 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.998 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.998 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.998 22:24:45 -- setup/common.sh@32 -- # continue 00:04:44.998 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.998 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.261 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.261 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ 
Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.262 22:24:45 -- setup/common.sh@33 -- # echo 0 00:04:45.262 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:45.262 22:24:45 -- setup/hugepages.sh@97 -- # anon=0 00:04:45.262 22:24:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.262 22:24:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.262 22:24:45 -- setup/common.sh@18 -- # local node= 00:04:45.262 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:45.262 22:24:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.262 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.262 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.262 22:24:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.262 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.262 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6829020 kB' 'MemAvailable: 9463648 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497728 kB' 'Inactive: 2459456 kB' 'Active(anon): 128576 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119708 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187664 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101728 kB' 'KernelStack: 6796 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.262 
22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.262 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.262 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 
-- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.263 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.263 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.263 22:24:45 -- setup/common.sh@33 -- # echo 0 00:04:45.263 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:45.263 22:24:45 -- setup/hugepages.sh@99 -- # surp=0 00:04:45.263 22:24:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.263 22:24:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.263 22:24:45 -- setup/common.sh@18 -- # local node= 00:04:45.263 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:45.263 22:24:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.263 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.264 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.264 22:24:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.264 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.264 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6828768 kB' 'MemAvailable: 9463396 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497456 kB' 'Inactive: 2459456 kB' 'Active(anon): 128304 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187660 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101724 kB' 'KernelStack: 6796 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- 
setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 
22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.264 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.264 22:24:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 
22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.265 22:24:45 -- setup/common.sh@33 -- # echo 0 00:04:45.265 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:45.265 22:24:45 -- setup/hugepages.sh@100 -- # resv=0 00:04:45.265 nr_hugepages=1025 00:04:45.265 22:24:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:45.265 resv_hugepages=0 00:04:45.265 surplus_hugepages=0 00:04:45.265 anon_hugepages=0 00:04:45.265 22:24:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.265 22:24:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.265 22:24:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.265 22:24:45 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:45.265 22:24:45 -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:45.265 22:24:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.265 22:24:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.265 22:24:45 -- setup/common.sh@18 -- # local node= 00:04:45.265 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:45.265 22:24:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.265 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.265 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.265 22:24:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.265 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.265 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6828768 kB' 'MemAvailable: 9463396 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497696 kB' 'Inactive: 2459456 kB' 'Active(anon): 128544 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119676 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187656 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101720 kB' 'KernelStack: 6796 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 
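The check traced just above (hugepages.sh@107 and @109) requires the kernel's HugePages_Total to equal the requested 1025 pages plus any reserved and surplus pages before the per-node counts are compared. A rough stand-alone restatement of that arithmetic, with the values seen in the trace and illustrative names rather than SPDK's own helpers, is:

#!/usr/bin/env bash
# Rough restatement of the odd_alloc consistency check traced above.
nr_hugepages=1025    # requested via HUGEMEM=2049 with a 2048 kB hugepage size

read -r total rsvd surp < <(awk '
    /^HugePages_Total:/ { t = $2 }
    /^HugePages_Rsvd:/  { r = $2 }
    /^HugePages_Surp:/  { s = $2 }
    END { print t, r, s }' /proc/meminfo)

if (( total == nr_hugepages + surp + rsvd )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "unexpected hugepage accounting: total=$total rsvd=$rsvd surp=$surp" >&2
    exit 1
fi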
00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.265 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.265 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 
-- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.266 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.266 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.266 22:24:45 -- setup/common.sh@33 -- # echo 1025 00:04:45.266 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:45.266 22:24:45 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:45.266 22:24:45 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.266 22:24:45 -- setup/hugepages.sh@27 -- # local node 00:04:45.266 22:24:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.266 22:24:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:45.267 22:24:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.267 22:24:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.267 22:24:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.267 22:24:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.267 22:24:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.267 22:24:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.267 22:24:45 -- setup/common.sh@18 -- # local node=0 00:04:45.267 22:24:45 -- setup/common.sh@19 -- # local var val 00:04:45.267 22:24:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.267 22:24:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.267 22:24:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.267 22:24:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.267 22:24:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.267 22:24:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6828768 kB' 'MemUsed: 5410344 kB' 'SwapCached: 0 kB' 'Active: 497664 kB' 
'Inactive: 2459456 kB' 'Active(anon): 128512 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2839096 kB' 'Mapped: 50852 kB' 'AnonPages: 119640 kB' 'Shmem: 10488 kB' 'KernelStack: 6796 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85936 kB' 'Slab: 187656 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # 
[[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.267 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.267 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.268 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.268 22:24:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.268 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.268 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.268 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.268 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.268 22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.268 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.268 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.268 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.268 
22:24:45 -- setup/common.sh@32 -- # continue 00:04:45.268 22:24:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.268 22:24:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.268 22:24:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.268 22:24:45 -- setup/common.sh@33 -- # echo 0 00:04:45.268 22:24:45 -- setup/common.sh@33 -- # return 0 00:04:45.268 22:24:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.268 node0=1025 expecting 1025 00:04:45.268 22:24:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.268 22:24:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.268 22:24:45 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:45.268 22:24:45 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:45.268 00:04:45.268 real 0m0.615s 00:04:45.268 user 0m0.287s 00:04:45.268 sys 0m0.326s 00:04:45.268 22:24:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.268 22:24:45 -- common/autotest_common.sh@10 -- # set +x 00:04:45.268 ************************************ 00:04:45.268 END TEST odd_alloc 00:04:45.268 ************************************ 00:04:45.268 22:24:45 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:45.268 22:24:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.268 22:24:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.268 22:24:45 -- common/autotest_common.sh@10 -- # set +x 00:04:45.268 ************************************ 00:04:45.268 START TEST custom_alloc 00:04:45.268 ************************************ 00:04:45.268 22:24:45 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:45.268 22:24:45 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:45.268 22:24:45 -- setup/hugepages.sh@169 -- # local node 00:04:45.268 22:24:45 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:45.268 22:24:45 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:45.268 22:24:45 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:45.268 22:24:45 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:45.268 22:24:45 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:45.268 22:24:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:45.268 22:24:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:45.268 22:24:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.268 22:24:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.268 22:24:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.268 22:24:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.268 22:24:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.268 22:24:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.268 22:24:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:45.268 22:24:45 -- setup/hugepages.sh@83 -- # : 0 00:04:45.268 22:24:45 -- setup/hugepages.sh@84 -- # : 0 00:04:45.268 22:24:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@175 -- # 
nodes_hp[0]=512 00:04:45.268 22:24:45 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:45.268 22:24:45 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:45.268 22:24:45 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:45.268 22:24:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.268 22:24:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.268 22:24:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.268 22:24:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.268 22:24:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.268 22:24:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.268 22:24:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:45.268 22:24:45 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:45.268 22:24:45 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:45.268 22:24:45 -- setup/hugepages.sh@78 -- # return 0 00:04:45.268 22:24:45 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:45.268 22:24:45 -- setup/hugepages.sh@187 -- # setup output 00:04:45.268 22:24:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.268 22:24:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.839 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.839 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.839 22:24:46 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:45.839 22:24:46 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:45.839 22:24:46 -- setup/hugepages.sh@89 -- # local node 00:04:45.839 22:24:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.839 22:24:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.839 22:24:46 -- setup/hugepages.sh@92 -- # local surp 00:04:45.839 22:24:46 -- setup/hugepages.sh@93 -- # local resv 00:04:45.839 22:24:46 -- setup/hugepages.sh@94 -- # local anon 00:04:45.839 22:24:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.839 22:24:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.839 22:24:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.839 22:24:46 -- setup/common.sh@18 -- # local node= 00:04:45.839 22:24:46 -- setup/common.sh@19 -- # local var val 00:04:45.839 22:24:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.839 22:24:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.839 22:24:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.839 22:24:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.839 22:24:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.839 22:24:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7882108 kB' 'MemAvailable: 10516736 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 498060 kB' 'Inactive: 2459456 kB' 'Active(anon): 128908 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120024 kB' 'Mapped: 51056 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187588 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101652 kB' 'KernelStack: 6852 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.839 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.839 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 
22:24:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 
22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # 
continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.840 22:24:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.840 22:24:46 -- setup/common.sh@33 -- # echo 0 00:04:45.840 22:24:46 -- setup/common.sh@33 -- # return 0 00:04:45.840 22:24:46 -- setup/hugepages.sh@97 -- # anon=0 00:04:45.840 22:24:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.840 22:24:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.840 22:24:46 -- setup/common.sh@18 -- # local node= 00:04:45.840 22:24:46 -- setup/common.sh@19 -- # local var val 00:04:45.840 22:24:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.840 22:24:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.840 22:24:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.840 22:24:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.840 22:24:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.840 22:24:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.840 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7882748 kB' 'MemAvailable: 10517376 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497380 kB' 'Inactive: 2459456 kB' 'Active(anon): 128228 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50956 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187592 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101656 kB' 'KernelStack: 6816 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 
'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.841 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.841 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- 
setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 
00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.842 22:24:46 -- setup/common.sh@33 -- # echo 0 00:04:45.842 22:24:46 -- setup/common.sh@33 -- # return 0 00:04:45.842 22:24:46 -- setup/hugepages.sh@99 -- # surp=0 00:04:45.842 22:24:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.842 22:24:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.842 22:24:46 -- setup/common.sh@18 -- # local node= 00:04:45.842 22:24:46 -- setup/common.sh@19 -- # local var val 00:04:45.842 22:24:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.842 22:24:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.842 22:24:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.842 22:24:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.842 22:24:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.842 22:24:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7882748 kB' 'MemAvailable: 10517376 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497584 kB' 'Inactive: 2459456 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119580 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187592 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101656 kB' 'KernelStack: 6800 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.842 22:24:46 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:45.842 22:24:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.843 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.843 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.844 22:24:46 -- setup/common.sh@33 -- # echo 0 00:04:45.844 22:24:46 -- setup/common.sh@33 -- # return 0 00:04:45.844 nr_hugepages=512 00:04:45.844 resv_hugepages=0 00:04:45.844 surplus_hugepages=0 00:04:45.844 anon_hugepages=0 00:04:45.844 22:24:46 -- setup/hugepages.sh@100 -- # resv=0 00:04:45.844 22:24:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:45.844 22:24:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.844 22:24:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.844 22:24:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 
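(Aside on what the trace above is doing: setup/common.sh's get_meminfo walks /proc/meminfo one "key: value" pair at a time with IFS=': ' read -r var val _, skipping every key until it reaches the requested one, here HugePages_Rsvd, and setup/hugepages.sh then echoes the resulting pool figures, nr_hugepages=512 with zero reserved, surplus and anonymous pages. The snippet below is a minimal standalone sketch of that same pattern; the function name get_hugepage_field and the final consistency check are illustrative assumptions, not code taken from the SPDK scripts.)

# Illustrative sketch only -- not the setup/common.sh implementation.
# Scan /proc/meminfo one "key: value" pair at a time, exactly like the
# traced loop above, and print the value of the first matching key.
get_hugepage_field() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

# Hypothetical usage mirroring the accounting the test echoes above:
target=512                                     # requested pool size
total=$(get_hugepage_field HugePages_Total)
resv=$(get_hugepage_field HugePages_Rsvd)
surp=$(get_hugepage_field HugePages_Surp)
# The pool is healthy when the kernel-reported total matches the requested
# count plus surplus and reserved pages (512 == 512 + 0 + 0 in this run).
(( total == target + surp + resv )) && echo "hugepage pool OK: $total pages"

Reading the counters this way keeps the check in pure shell and narrows /proc/meminfo down to one field per call, which is the same design the traced helper follows before the per-node verification that continues below.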
00:04:45.844 22:24:46 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:45.844 22:24:46 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:45.844 22:24:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.844 22:24:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.844 22:24:46 -- setup/common.sh@18 -- # local node= 00:04:45.844 22:24:46 -- setup/common.sh@19 -- # local var val 00:04:45.844 22:24:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.844 22:24:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.844 22:24:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.844 22:24:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.844 22:24:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.844 22:24:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7882748 kB' 'MemAvailable: 10517376 kB' 'Buffers: 2684 kB' 'Cached: 2836412 kB' 'SwapCached: 0 kB' 'Active: 497572 kB' 'Inactive: 2459456 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 50956 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187588 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101652 kB' 'KernelStack: 6800 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 312072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.844 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.844 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # 
continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.845 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.845 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 
00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # continue 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.846 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.846 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.846 22:24:46 -- setup/common.sh@33 -- # echo 512 00:04:45.846 22:24:46 -- setup/common.sh@33 -- # return 0 00:04:45.846 22:24:46 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:45.846 22:24:46 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.846 22:24:46 -- setup/hugepages.sh@27 -- # local node 00:04:45.846 22:24:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.846 22:24:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.846 22:24:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.846 22:24:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.846 22:24:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.846 22:24:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.846 22:24:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.846 22:24:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.846 22:24:46 -- setup/common.sh@18 -- # local node=0 00:04:45.846 22:24:46 -- setup/common.sh@19 -- # local var val 00:04:45.846 22:24:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.846 22:24:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.846 22:24:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.846 22:24:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.846 22:24:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.846 22:24:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7883128 kB' 
'MemUsed: 4355984 kB' 'SwapCached: 0 kB' 'Active: 497572 kB' 'Inactive: 2459456 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2839096 kB' 'Mapped: 50956 kB' 'AnonPages: 119524 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85936 kB' 'Slab: 187584 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 
-- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.106 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.106 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # continue 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.107 22:24:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.107 22:24:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.107 22:24:46 -- setup/common.sh@33 -- # echo 0 00:04:46.107 22:24:46 -- setup/common.sh@33 -- # return 0 00:04:46.107 node0=512 expecting 512 00:04:46.107 22:24:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.107 22:24:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.107 22:24:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.107 22:24:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.107 22:24:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.107 22:24:46 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.107 00:04:46.107 real 0m0.647s 00:04:46.107 user 0m0.299s 00:04:46.107 sys 0m0.347s 00:04:46.107 22:24:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.107 ************************************ 00:04:46.107 END TEST custom_alloc 00:04:46.107 ************************************ 00:04:46.107 22:24:46 -- common/autotest_common.sh@10 -- # set +x 00:04:46.107 22:24:46 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:46.107 22:24:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.107 22:24:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.107 22:24:46 -- common/autotest_common.sh@10 -- # set +x 00:04:46.107 ************************************ 00:04:46.107 START TEST no_shrink_alloc 00:04:46.107 ************************************ 00:04:46.107 22:24:46 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:46.107 22:24:46 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:46.107 22:24:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.107 22:24:46 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:46.107 22:24:46 -- setup/hugepages.sh@51 -- # shift 00:04:46.107 22:24:46 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:46.107 22:24:46 -- setup/hugepages.sh@52 -- # local node_ids 00:04:46.107 22:24:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.107 22:24:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.107 22:24:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:46.107 22:24:46 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:46.107 22:24:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.107 22:24:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.107 22:24:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.107 22:24:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.107 22:24:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.107 22:24:46 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:46.107 22:24:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.107 22:24:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:46.107 22:24:46 -- setup/hugepages.sh@73 -- # return 0 00:04:46.107 22:24:46 -- setup/hugepages.sh@198 -- # setup output 00:04:46.107 22:24:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.107 22:24:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.367 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.367 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.367 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.367 22:24:47 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:46.367 22:24:47 -- setup/hugepages.sh@89 -- # local node 00:04:46.367 22:24:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.367 22:24:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.367 22:24:47 -- setup/hugepages.sh@92 -- # local surp 00:04:46.367 22:24:47 -- setup/hugepages.sh@93 -- # local resv 00:04:46.367 22:24:47 -- setup/hugepages.sh@94 -- # local anon 00:04:46.367 22:24:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.367 22:24:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.367 22:24:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.367 22:24:47 -- setup/common.sh@18 -- # local node= 00:04:46.367 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:46.367 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.367 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.367 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.367 22:24:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.367 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.367 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.367 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6836780 kB' 'MemAvailable: 9471416 kB' 'Buffers: 2684 kB' 'Cached: 2836420 kB' 'SwapCached: 0 kB' 'Active: 497836 kB' 'Inactive: 2459464 kB' 'Active(anon): 128684 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 51020 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187600 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101664 kB' 'KernelStack: 6796 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 312272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:46.367 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.367 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.367 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.367 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.367 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 22:24:47 -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.367 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.367 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 
-- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.631 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.631 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.632 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.632 22:24:47 -- setup/common.sh@33 -- # echo 0 00:04:46.632 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:46.632 22:24:47 -- setup/hugepages.sh@97 -- # anon=0 00:04:46.632 22:24:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.632 22:24:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.632 22:24:47 -- setup/common.sh@18 -- # local node= 00:04:46.632 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:46.632 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.632 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.632 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.632 22:24:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.632 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.632 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.632 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6844600 kB' 'MemAvailable: 9479236 kB' 'Buffers: 2684 kB' 'Cached: 2836420 kB' 'SwapCached: 0 kB' 'Active: 495304 kB' 'Inactive: 2459464 kB' 'Active(anon): 126152 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117272 kB' 'Mapped: 50364 kB' 'Shmem: 10488 kB' 'KReclaimable: 85936 kB' 'Slab: 187552 kB' 'SReclaimable: 85936 kB' 'SUnreclaim: 101616 kB' 'KernelStack: 6732 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 
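The long run of [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue traces above is setup/common.sh walking its captured /proc/meminfo snapshot field by field until it reaches the requested key (HugePages_Surp here) and echoing that field's value. Stripped of the node handling and mapfile bookkeeping, the lookup is roughly the sketch below; meminfo_get is a made-up name used only for illustration, not the real get_meminfo helper:

# minimal sketch, assuming a standard Linux /proc/meminfo layout
meminfo_get() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue   # every non-matching field shows up as a 'continue' trace
        echo "$val"                        # kB value, or a bare page count for HugePages_* keys
        return 0
    done < "$file"
    echo 0                                 # key not found
}
meminfo_get HugePages_Surp    # prints 0 on this runner, matching surp=0 above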
00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.633 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.633 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.634 22:24:47 -- setup/common.sh@33 -- # echo 0 00:04:46.634 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:46.634 22:24:47 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.634 22:24:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.634 22:24:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.634 22:24:47 -- setup/common.sh@18 -- # local node= 00:04:46.634 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:46.634 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.634 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.634 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.634 22:24:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.634 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.634 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.634 22:24:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6844604 kB' 'MemAvailable: 9479236 kB' 'Buffers: 2684 kB' 'Cached: 2836420 kB' 'SwapCached: 0 kB' 'Active: 495068 kB' 'Inactive: 2459464 kB' 'Active(anon): 125916 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117036 kB' 'Mapped: 50172 kB' 'Shmem: 10488 kB' 'KReclaimable: 85928 kB' 'Slab: 187432 kB' 'SReclaimable: 85928 kB' 'SUnreclaim: 101504 kB' 'KernelStack: 6700 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.634 22:24:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.634 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.634 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.635 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.635 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.636 
22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.636 22:24:47 -- setup/common.sh@33 -- # echo 0 00:04:46.636 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:46.636 nr_hugepages=1024 00:04:46.636 resv_hugepages=0 00:04:46.636 surplus_hugepages=0 00:04:46.636 anon_hugepages=0 00:04:46.636 22:24:47 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.636 22:24:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.636 22:24:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.636 22:24:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.636 22:24:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.636 22:24:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.636 22:24:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.636 22:24:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.636 22:24:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.636 22:24:47 -- setup/common.sh@18 -- # local node= 00:04:46.636 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:46.636 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.636 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.636 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.636 22:24:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.636 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.636 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.636 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6844604 kB' 'MemAvailable: 9479236 kB' 'Buffers: 2684 kB' 'Cached: 2836420 kB' 'SwapCached: 0 kB' 'Active: 495220 kB' 'Inactive: 2459464 kB' 'Active(anon): 126068 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116980 kB' 'Mapped: 50172 kB' 'Shmem: 10488 kB' 'KReclaimable: 85928 kB' 'Slab: 187432 kB' 'SReclaimable: 85928 kB' 'SUnreclaim: 101504 kB' 'KernelStack: 
6716 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 
22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.636 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.636 22:24:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- 
setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.637 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.637 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.638 22:24:47 -- 
setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.638 22:24:47 -- setup/common.sh@33 -- # echo 1024 00:04:46.638 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:46.638 22:24:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.638 22:24:47 -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.638 22:24:47 -- setup/hugepages.sh@27 -- # local node 00:04:46.638 22:24:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.638 22:24:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.638 22:24:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.638 22:24:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.638 22:24:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.638 22:24:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.638 22:24:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.638 22:24:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.638 22:24:47 -- setup/common.sh@18 -- # local node=0 00:04:46.638 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:46.638 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.638 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.638 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.638 22:24:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.638 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.638 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6844604 kB' 'MemUsed: 5394508 kB' 'SwapCached: 0 kB' 'Active: 495140 kB' 'Inactive: 2459464 kB' 'Active(anon): 125988 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2839104 kB' 'Mapped: 50172 kB' 'AnonPages: 117144 kB' 'Shmem: 10488 kB' 'KernelStack: 6716 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85928 kB' 'Slab: 187432 kB' 'SReclaimable: 85928 kB' 'SUnreclaim: 101504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 
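At this point the global numbers have been re-read, the check (( 1024 == nr_hugepages + surp + resv )) has passed, and the script has moved on to per-node accounting: the same parser is pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, which is why a second meminfo dump (with MemUsed and FilePages rather than MemAvailable) appears above. On a single-node VM like this runner the per-node hugepage counters can be checked by hand with a one-liner such as:

# per-node counters; sysfs path is standard, lines look like 'Node 0 HugePages_Total: 1024'
awk '/HugePages_(Total|Free|Surp)/ {print FILENAME": "$3, $4}' /sys/devices/system/node/node*/meminfo

which should report HugePages_Total: 1024, HugePages_Free: 1024 and HugePages_Surp: 0 for node0, consistent with the node0=1024 expecting 1024 line echoed a little further down.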
00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.638 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.638 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # continue 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.639 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.639 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.639 22:24:47 -- setup/common.sh@33 -- # echo 0 00:04:46.639 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:46.639 22:24:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.639 22:24:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.639 22:24:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.639 node0=1024 expecting 1024 00:04:46.639 22:24:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.639 22:24:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.639 22:24:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.639 22:24:47 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:46.639 22:24:47 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:46.639 22:24:47 -- setup/hugepages.sh@202 -- # setup output 00:04:46.639 22:24:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.639 22:24:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:04:47.211 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.211 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.211 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:47.211 22:24:47 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:47.211 22:24:47 -- setup/hugepages.sh@89 -- # local node 00:04:47.211 22:24:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.211 22:24:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.211 22:24:47 -- setup/hugepages.sh@92 -- # local surp 00:04:47.211 22:24:47 -- setup/hugepages.sh@93 -- # local resv 00:04:47.211 22:24:47 -- setup/hugepages.sh@94 -- # local anon 00:04:47.211 22:24:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.211 22:24:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.211 22:24:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.211 22:24:47 -- setup/common.sh@18 -- # local node= 00:04:47.211 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:47.211 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.211 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.211 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.211 22:24:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.211 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.211 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.212 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6845272 kB' 'MemAvailable: 9479900 kB' 'Buffers: 2684 kB' 'Cached: 2836416 kB' 'SwapCached: 0 kB' 'Active: 495452 kB' 'Inactive: 2459460 kB' 'Active(anon): 126300 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117480 kB' 'Mapped: 50244 kB' 'Shmem: 10488 kB' 'KReclaimable: 85928 kB' 'Slab: 187328 kB' 'SReclaimable: 85928 kB' 'SUnreclaim: 101400 kB' 'KernelStack: 6728 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # 
[[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 
-- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': 
' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.212 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.212 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.213 22:24:47 -- 
setup/common.sh@33 -- # echo 0 00:04:47.213 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:47.213 22:24:47 -- setup/hugepages.sh@97 -- # anon=0 00:04:47.213 22:24:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.213 22:24:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.213 22:24:47 -- setup/common.sh@18 -- # local node= 00:04:47.213 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:47.213 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.213 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.213 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.213 22:24:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.213 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.213 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6845272 kB' 'MemAvailable: 9479900 kB' 'Buffers: 2684 kB' 'Cached: 2836416 kB' 'SwapCached: 0 kB' 'Active: 495028 kB' 'Inactive: 2459460 kB' 'Active(anon): 125876 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117288 kB' 'Mapped: 50056 kB' 'Shmem: 10488 kB' 'KReclaimable: 85928 kB' 'Slab: 187316 kB' 'SReclaimable: 85928 kB' 'SUnreclaim: 101388 kB' 'KernelStack: 6688 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- 
setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.213 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.213 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.214 22:24:47 -- setup/common.sh@33 -- # echo 0 00:04:47.214 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:47.214 22:24:47 -- setup/hugepages.sh@99 -- # surp=0 00:04:47.214 22:24:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.214 22:24:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.214 22:24:47 -- setup/common.sh@18 -- # local node= 00:04:47.214 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:47.214 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.214 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.214 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.214 22:24:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.214 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.214 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- 
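For readers following the trace: the meminfo lookups above (AnonHugePages and HugePages_Surp so far, with HugePages_Rsvd and HugePages_Total below) all run through the same get_meminfo parsing loop in setup/common.sh. A minimal sketch of that loop, reconstructed only from what this trace shows — the helper name and exact details are illustrative, not the script's verbatim source:

    # sketch only (assumption: illustrative, not setup/common.sh itself)
    shopt -s extglob                          # needed for the "Node <n> " strip below
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # per-node lookups read the node-specific meminfo file instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each key with "Node <n> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # numeric value only, unit dropped
        done
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp   -> 0 on this runner, matching surp=0 above

Every key that does not match the requested field is skipped with "continue", which is why each lookup walks the full /proc/meminfo key list in the trace before returning its value.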
setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6845272 kB' 'MemAvailable: 9479900 kB' 'Buffers: 2684 kB' 'Cached: 2836416 kB' 'SwapCached: 0 kB' 'Active: 494876 kB' 'Inactive: 2459460 kB' 'Active(anon): 125724 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116904 kB' 'Mapped: 50056 kB' 'Shmem: 10488 kB' 'KReclaimable: 85928 kB' 'Slab: 187312 kB' 'SReclaimable: 85928 kB' 'SUnreclaim: 101384 kB' 'KernelStack: 6704 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.214 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.214 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # 
continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.215 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.215 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.215 22:24:47 -- setup/common.sh@33 -- # echo 0 00:04:47.215 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:47.215 22:24:47 -- setup/hugepages.sh@100 -- # resv=0 00:04:47.215 nr_hugepages=1024 00:04:47.215 22:24:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:47.215 resv_hugepages=0 00:04:47.215 22:24:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.215 surplus_hugepages=0 00:04:47.215 22:24:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.215 anon_hugepages=0 00:04:47.215 22:24:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.215 22:24:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.215 22:24:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:47.215 22:24:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.215 22:24:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.215 22:24:47 -- setup/common.sh@18 -- # local node= 00:04:47.215 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:47.216 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.216 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.216 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.216 22:24:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.216 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.216 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6845272 kB' 'MemAvailable: 9479900 kB' 'Buffers: 2684 kB' 'Cached: 2836416 kB' 'SwapCached: 0 kB' 'Active: 494836 kB' 'Inactive: 2459460 kB' 'Active(anon): 125684 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116864 kB' 'Mapped: 50056 kB' 'Shmem: 10488 kB' 'KReclaimable: 85928 kB' 'Slab: 187312 kB' 'SReclaimable: 85928 kB' 
'SUnreclaim: 101384 kB' 'KernelStack: 6688 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 295936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.216 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.216 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.217 22:24:47 -- setup/common.sh@33 -- # echo 1024 00:04:47.217 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:47.217 22:24:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.217 22:24:47 -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.217 22:24:47 -- setup/hugepages.sh@27 -- # local node 00:04:47.217 22:24:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.217 22:24:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.217 22:24:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.217 22:24:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.217 22:24:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.217 22:24:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.217 22:24:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.217 22:24:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.217 22:24:47 -- setup/common.sh@18 -- # local node=0 00:04:47.217 22:24:47 -- setup/common.sh@19 -- # local var val 00:04:47.217 22:24:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.217 22:24:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.217 22:24:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.217 22:24:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.217 22:24:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.217 22:24:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6845568 kB' 'MemUsed: 5393544 kB' 'SwapCached: 0 kB' 'Active: 494800 kB' 'Inactive: 2459460 kB' 'Active(anon): 125648 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2839100 kB' 'Mapped: 50056 kB' 'AnonPages: 116828 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85928 kB' 'Slab: 187312 kB' 'SReclaimable: 85928 kB' 'SUnreclaim: 101384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.217 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.217 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # 
continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # continue 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.218 22:24:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.218 22:24:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.218 22:24:47 -- setup/common.sh@33 -- # echo 0 00:04:47.218 22:24:47 -- setup/common.sh@33 -- # return 0 00:04:47.218 22:24:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.218 22:24:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.218 22:24:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.218 22:24:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.218 node0=1024 expecting 1024 00:04:47.218 22:24:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:47.218 22:24:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:47.218 00:04:47.218 real 0m1.218s 00:04:47.218 user 0m0.578s 00:04:47.218 sys 0m0.644s 00:04:47.218 22:24:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.218 22:24:47 -- common/autotest_common.sh@10 -- # set +x 00:04:47.218 ************************************ 00:04:47.218 END TEST no_shrink_alloc 00:04:47.218 ************************************ 00:04:47.218 22:24:47 -- setup/hugepages.sh@217 -- # clear_hp 00:04:47.218 22:24:47 -- setup/hugepages.sh@37 -- # local node hp 00:04:47.218 22:24:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
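The two long scans above are setup/common.sh's get_meminfo walking every "field: value" pair, first through /proc/meminfo until HugePages_Total and then through the node-0 copy until HugePages_Surp. A condensed, illustrative version of the per-node lookup (the helper name and the awk shortcut are not the script's own code):

    # Read one counter from a NUMA node's meminfo, e.g. HugePages_Surp on node 0.
    # Per-node lines look like "Node 0 HugePages_Surp:       0".
    get_node_meminfo() {
        local field=$1 node=${2:-0}
        local file=/sys/devices/system/node/node${node}/meminfo
        awk -v want="${field}:" '$3 == want { print $4 }' "$file"
    }

    get_node_meminfo HugePages_Surp 0    # prints 0 in the run logged above
    get_node_meminfo HugePages_Total 0   # prints 1024, matching "node0=1024 expecting 1024"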
00:04:47.218 22:24:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.218 22:24:47 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.218 22:24:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.218 22:24:47 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.218 22:24:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:47.218 22:24:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:47.218 00:04:47.218 real 0m5.342s 00:04:47.218 user 0m2.485s 00:04:47.218 sys 0m2.788s 00:04:47.218 22:24:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.218 22:24:47 -- common/autotest_common.sh@10 -- # set +x 00:04:47.218 ************************************ 00:04:47.218 END TEST hugepages 00:04:47.218 ************************************ 00:04:47.477 22:24:47 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:47.477 22:24:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.477 22:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.477 22:24:47 -- common/autotest_common.sh@10 -- # set +x 00:04:47.477 ************************************ 00:04:47.477 START TEST driver 00:04:47.477 ************************************ 00:04:47.477 22:24:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:47.477 * Looking for test storage... 00:04:47.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.477 22:24:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:47.477 22:24:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:47.477 22:24:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:47.477 22:24:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:47.477 22:24:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:47.477 22:24:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:47.477 22:24:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:47.477 22:24:48 -- scripts/common.sh@335 -- # IFS=.-: 00:04:47.477 22:24:48 -- scripts/common.sh@335 -- # read -ra ver1 00:04:47.477 22:24:48 -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.477 22:24:48 -- scripts/common.sh@336 -- # read -ra ver2 00:04:47.477 22:24:48 -- scripts/common.sh@337 -- # local 'op=<' 00:04:47.477 22:24:48 -- scripts/common.sh@339 -- # ver1_l=2 00:04:47.477 22:24:48 -- scripts/common.sh@340 -- # ver2_l=1 00:04:47.477 22:24:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:47.477 22:24:48 -- scripts/common.sh@343 -- # case "$op" in 00:04:47.477 22:24:48 -- scripts/common.sh@344 -- # : 1 00:04:47.477 22:24:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:47.477 22:24:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.477 22:24:48 -- scripts/common.sh@364 -- # decimal 1 00:04:47.477 22:24:48 -- scripts/common.sh@352 -- # local d=1 00:04:47.477 22:24:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.477 22:24:48 -- scripts/common.sh@354 -- # echo 1 00:04:47.477 22:24:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:47.477 22:24:48 -- scripts/common.sh@365 -- # decimal 2 00:04:47.477 22:24:48 -- scripts/common.sh@352 -- # local d=2 00:04:47.477 22:24:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.477 22:24:48 -- scripts/common.sh@354 -- # echo 2 00:04:47.477 22:24:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:47.477 22:24:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:47.477 22:24:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:47.477 22:24:48 -- scripts/common.sh@367 -- # return 0 00:04:47.477 22:24:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.477 22:24:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:47.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.477 --rc genhtml_branch_coverage=1 00:04:47.477 --rc genhtml_function_coverage=1 00:04:47.477 --rc genhtml_legend=1 00:04:47.477 --rc geninfo_all_blocks=1 00:04:47.477 --rc geninfo_unexecuted_blocks=1 00:04:47.477 00:04:47.477 ' 00:04:47.477 22:24:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:47.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.477 --rc genhtml_branch_coverage=1 00:04:47.477 --rc genhtml_function_coverage=1 00:04:47.477 --rc genhtml_legend=1 00:04:47.477 --rc geninfo_all_blocks=1 00:04:47.477 --rc geninfo_unexecuted_blocks=1 00:04:47.477 00:04:47.477 ' 00:04:47.477 22:24:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:47.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.477 --rc genhtml_branch_coverage=1 00:04:47.477 --rc genhtml_function_coverage=1 00:04:47.477 --rc genhtml_legend=1 00:04:47.477 --rc geninfo_all_blocks=1 00:04:47.477 --rc geninfo_unexecuted_blocks=1 00:04:47.477 00:04:47.477 ' 00:04:47.477 22:24:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:47.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.477 --rc genhtml_branch_coverage=1 00:04:47.477 --rc genhtml_function_coverage=1 00:04:47.477 --rc genhtml_legend=1 00:04:47.477 --rc geninfo_all_blocks=1 00:04:47.477 --rc geninfo_unexecuted_blocks=1 00:04:47.477 00:04:47.477 ' 00:04:47.477 22:24:48 -- setup/driver.sh@68 -- # setup reset 00:04:47.477 22:24:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.477 22:24:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.413 22:24:48 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:48.413 22:24:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.413 22:24:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.413 22:24:48 -- common/autotest_common.sh@10 -- # set +x 00:04:48.413 ************************************ 00:04:48.413 START TEST guess_driver 00:04:48.413 ************************************ 00:04:48.413 22:24:48 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:48.413 22:24:48 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:48.413 22:24:48 -- setup/driver.sh@47 -- # local fail=0 00:04:48.413 22:24:48 -- setup/driver.sh@49 -- # pick_driver 00:04:48.413 22:24:48 -- setup/driver.sh@36 -- # vfio 
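The guess_driver pass that follows first checks whether vfio is usable (a populated /sys/kernel/iommu_groups, or the unsafe no-IOMMU switch set to Y) and otherwise falls back to uio_pci_generic, confirming with modprobe --show-depends that the module resolves to a real .ko file. A rough sketch of that decision; the function name and the vfio-pci spelling are illustrative rather than lifted from driver.sh:

    pick_nvme_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
        if (( ${#groups[@]} > 0 )) || [[ -e $unsafe && $(cat "$unsafe") == Y ]]; then
            echo vfio-pci                        # IOMMU groups (or unsafe no-IOMMU) present
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic                 # what this VM settles on below
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }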
00:04:48.413 22:24:48 -- setup/driver.sh@21 -- # local iommu_grups 00:04:48.413 22:24:48 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:48.413 22:24:48 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:48.413 22:24:48 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:48.413 22:24:48 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:48.413 22:24:48 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:48.413 22:24:48 -- setup/driver.sh@32 -- # return 1 00:04:48.413 22:24:48 -- setup/driver.sh@38 -- # uio 00:04:48.413 22:24:48 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:48.413 22:24:48 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:48.413 22:24:48 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:48.413 22:24:48 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:48.413 22:24:48 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:48.413 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:48.413 22:24:48 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:48.413 22:24:48 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:48.413 22:24:48 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:48.413 22:24:48 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:48.413 Looking for driver=uio_pci_generic 00:04:48.413 22:24:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.413 22:24:48 -- setup/driver.sh@45 -- # setup output config 00:04:48.413 22:24:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.413 22:24:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.980 22:24:49 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:48.980 22:24:49 -- setup/driver.sh@58 -- # continue 00:04:48.980 22:24:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.980 22:24:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.980 22:24:49 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:48.980 22:24:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.980 22:24:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.980 22:24:49 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:48.980 22:24:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.240 22:24:49 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:49.240 22:24:49 -- setup/driver.sh@65 -- # setup reset 00:04:49.240 22:24:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.240 22:24:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.808 00:04:49.808 real 0m1.562s 00:04:49.808 user 0m0.589s 00:04:49.808 sys 0m0.974s 00:04:49.808 22:24:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.808 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.808 ************************************ 00:04:49.808 END TEST guess_driver 00:04:49.808 ************************************ 00:04:49.808 ************************************ 00:04:49.808 END TEST driver 00:04:49.808 ************************************ 00:04:49.808 00:04:49.808 real 0m2.415s 00:04:49.808 user 0m0.923s 00:04:49.808 sys 0m1.563s 00:04:49.808 22:24:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.808 22:24:50 -- common/autotest_common.sh@10 -- # 
set +x 00:04:49.808 22:24:50 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:49.808 22:24:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.808 22:24:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.808 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.808 ************************************ 00:04:49.808 START TEST devices 00:04:49.808 ************************************ 00:04:49.808 22:24:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:49.808 * Looking for test storage... 00:04:49.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.808 22:24:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.808 22:24:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.808 22:24:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:50.067 22:24:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:50.067 22:24:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:50.067 22:24:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:50.067 22:24:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:50.067 22:24:50 -- scripts/common.sh@335 -- # IFS=.-: 00:04:50.067 22:24:50 -- scripts/common.sh@335 -- # read -ra ver1 00:04:50.067 22:24:50 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.067 22:24:50 -- scripts/common.sh@336 -- # read -ra ver2 00:04:50.067 22:24:50 -- scripts/common.sh@337 -- # local 'op=<' 00:04:50.067 22:24:50 -- scripts/common.sh@339 -- # ver1_l=2 00:04:50.067 22:24:50 -- scripts/common.sh@340 -- # ver2_l=1 00:04:50.067 22:24:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:50.067 22:24:50 -- scripts/common.sh@343 -- # case "$op" in 00:04:50.067 22:24:50 -- scripts/common.sh@344 -- # : 1 00:04:50.067 22:24:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:50.067 22:24:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.067 22:24:50 -- scripts/common.sh@364 -- # decimal 1 00:04:50.067 22:24:50 -- scripts/common.sh@352 -- # local d=1 00:04:50.067 22:24:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.067 22:24:50 -- scripts/common.sh@354 -- # echo 1 00:04:50.067 22:24:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:50.067 22:24:50 -- scripts/common.sh@365 -- # decimal 2 00:04:50.067 22:24:50 -- scripts/common.sh@352 -- # local d=2 00:04:50.067 22:24:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.067 22:24:50 -- scripts/common.sh@354 -- # echo 2 00:04:50.067 22:24:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:50.067 22:24:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:50.067 22:24:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:50.067 22:24:50 -- scripts/common.sh@367 -- # return 0 00:04:50.067 22:24:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.067 22:24:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:50.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.067 --rc genhtml_branch_coverage=1 00:04:50.067 --rc genhtml_function_coverage=1 00:04:50.067 --rc genhtml_legend=1 00:04:50.067 --rc geninfo_all_blocks=1 00:04:50.067 --rc geninfo_unexecuted_blocks=1 00:04:50.067 00:04:50.067 ' 00:04:50.067 22:24:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:50.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.067 --rc genhtml_branch_coverage=1 00:04:50.067 --rc genhtml_function_coverage=1 00:04:50.067 --rc genhtml_legend=1 00:04:50.067 --rc geninfo_all_blocks=1 00:04:50.067 --rc geninfo_unexecuted_blocks=1 00:04:50.067 00:04:50.067 ' 00:04:50.067 22:24:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:50.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.067 --rc genhtml_branch_coverage=1 00:04:50.067 --rc genhtml_function_coverage=1 00:04:50.067 --rc genhtml_legend=1 00:04:50.067 --rc geninfo_all_blocks=1 00:04:50.067 --rc geninfo_unexecuted_blocks=1 00:04:50.067 00:04:50.067 ' 00:04:50.067 22:24:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:50.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.067 --rc genhtml_branch_coverage=1 00:04:50.067 --rc genhtml_function_coverage=1 00:04:50.067 --rc genhtml_legend=1 00:04:50.067 --rc geninfo_all_blocks=1 00:04:50.067 --rc geninfo_unexecuted_blocks=1 00:04:50.067 00:04:50.067 ' 00:04:50.067 22:24:50 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:50.067 22:24:50 -- setup/devices.sh@192 -- # setup reset 00:04:50.067 22:24:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.067 22:24:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.003 22:24:51 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:51.003 22:24:51 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:51.003 22:24:51 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:51.003 22:24:51 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:51.003 22:24:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.003 22:24:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:51.003 22:24:51 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:51.003 22:24:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:51.003 22:24:51 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:51.003 22:24:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.003 22:24:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:51.003 22:24:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:51.003 22:24:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:51.003 22:24:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.003 22:24:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.003 22:24:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:51.003 22:24:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:51.003 22:24:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:51.003 22:24:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.003 22:24:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.003 22:24:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:51.003 22:24:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:51.003 22:24:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:51.003 22:24:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.003 22:24:51 -- setup/devices.sh@196 -- # blocks=() 00:04:51.003 22:24:51 -- setup/devices.sh@196 -- # declare -a blocks 00:04:51.003 22:24:51 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:51.003 22:24:51 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:51.003 22:24:51 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:51.003 22:24:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.003 22:24:51 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:51.003 22:24:51 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:51.003 22:24:51 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:51.003 22:24:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:51.003 22:24:51 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:51.003 22:24:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:51.003 No valid GPT data, bailing 00:04:51.003 22:24:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:51.003 22:24:51 -- scripts/common.sh@393 -- # pt= 00:04:51.003 22:24:51 -- scripts/common.sh@394 -- # return 1 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:51.003 22:24:51 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:51.003 22:24:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:51.003 22:24:51 -- setup/common.sh@80 -- # echo 5368709120 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:51.003 22:24:51 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.003 22:24:51 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:51.003 22:24:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.003 22:24:51 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:51.003 22:24:51 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.003 22:24:51 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:51.003 22:24:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
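block_in_use treats a namespace as free only when no partition table is found on it: the spdk-gpt.py helper is consulted first (hence "No valid GPT data, bailing"), then blkid is asked for a PTTYPE, and an empty answer lets the disk join the test pool. A minimal stand-in for that probe, keeping only the blkid half:

    # Return 0 (in use) when blkid reports any partition-table type on the disk.
    block_in_use() {
        local dev=$1                                      # e.g. nvme0n1
        local pt
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
        [[ -n $pt ]]
    }

    block_in_use nvme0n1 || echo "nvme0n1 is free"        # free in the run above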
00:04:51.003 22:24:51 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:51.003 22:24:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:51.003 No valid GPT data, bailing 00:04:51.003 22:24:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:51.003 22:24:51 -- scripts/common.sh@393 -- # pt= 00:04:51.003 22:24:51 -- scripts/common.sh@394 -- # return 1 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:51.003 22:24:51 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:51.003 22:24:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:51.003 22:24:51 -- setup/common.sh@80 -- # echo 4294967296 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.003 22:24:51 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.003 22:24:51 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:51.003 22:24:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.003 22:24:51 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:51.003 22:24:51 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.003 22:24:51 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:51.003 22:24:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:51.003 22:24:51 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:51.003 22:24:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:51.003 No valid GPT data, bailing 00:04:51.003 22:24:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:51.003 22:24:51 -- scripts/common.sh@393 -- # pt= 00:04:51.003 22:24:51 -- scripts/common.sh@394 -- # return 1 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:51.003 22:24:51 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:51.003 22:24:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:51.003 22:24:51 -- setup/common.sh@80 -- # echo 4294967296 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.003 22:24:51 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.003 22:24:51 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:51.003 22:24:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.003 22:24:51 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:51.003 22:24:51 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.003 22:24:51 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:51.003 22:24:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:51.003 22:24:51 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:51.003 22:24:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:51.003 No valid GPT data, bailing 00:04:51.003 22:24:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:51.003 22:24:51 -- scripts/common.sh@393 -- # pt= 00:04:51.003 22:24:51 -- scripts/common.sh@394 -- # return 1 00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:51.003 22:24:51 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:51.003 22:24:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:51.003 22:24:51 -- setup/common.sh@80 -- # echo 4294967296 
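sec_size_to_bytes converts the sysfs sector count to bytes (512-byte sectors), which is where the 5368709120 for nvme0n1 and the 4294967296 for each nvme1 namespace come from before the min_disk_size comparison. A sketch of that conversion and the size gate:

    # Capacity in bytes; /sys/block/<dev>/size counts 512-byte sectors.
    sec_size_to_bytes() {
        local dev=$1
        [[ -e /sys/block/$dev ]] || return 1
        echo $(( $(cat "/sys/block/$dev/size") * 512 ))
    }

    min_disk_size=$((3 * 1024 * 1024 * 1024))             # 3221225472, as in devices.sh
    (( $(sec_size_to_bytes nvme1n1) >= min_disk_size )) && echo "nvme1n1 qualifies"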
00:04:51.003 22:24:51 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.003 22:24:51 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.003 22:24:51 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:51.003 22:24:51 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:51.003 22:24:51 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:51.003 22:24:51 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:51.003 22:24:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.262 22:24:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.262 22:24:51 -- common/autotest_common.sh@10 -- # set +x 00:04:51.262 ************************************ 00:04:51.262 START TEST nvme_mount 00:04:51.262 ************************************ 00:04:51.262 22:24:51 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:51.262 22:24:51 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:51.262 22:24:51 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:51.262 22:24:51 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.262 22:24:51 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:51.262 22:24:51 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:51.262 22:24:51 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:51.262 22:24:51 -- setup/common.sh@40 -- # local part_no=1 00:04:51.262 22:24:51 -- setup/common.sh@41 -- # local size=1073741824 00:04:51.262 22:24:51 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:51.262 22:24:51 -- setup/common.sh@44 -- # parts=() 00:04:51.262 22:24:51 -- setup/common.sh@44 -- # local parts 00:04:51.262 22:24:51 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:51.262 22:24:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.262 22:24:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:51.262 22:24:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:51.262 22:24:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.262 22:24:51 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:51.262 22:24:51 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:51.262 22:24:51 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:52.197 Creating new GPT entries in memory. 00:04:52.197 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:52.197 other utilities. 00:04:52.197 22:24:52 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:52.197 22:24:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.197 22:24:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.197 22:24:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.197 22:24:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:53.132 Creating new GPT entries in memory. 00:04:53.132 The operation has completed successfully. 
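partition_drive's sequence is visible verbatim above: zap any existing GPT/MBR, then create partition 1 with start and end given as sector numbers, holding a flock on the disk while sgdisk runs and letting sync_dev_uevents.sh block until udev announces the new node. The same steps in isolation, with the uevent wait simplified to udevadm settle (not the helper the script itself uses):

    disk=/dev/nvme0n1

    sgdisk "$disk" --zap-all                              # destroy old partition tables
    flock "$disk" sgdisk "$disk" --new=1:2048:264191      # partition 1, sectors 2048..264191
    udevadm settle                                        # wait for the p1 device node
    [[ -b ${disk}p1 ]] && echo "${disk}p1 ready for mkfs"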
00:04:53.132 22:24:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:53.132 22:24:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.132 22:24:53 -- setup/common.sh@62 -- # wait 65567 00:04:53.132 22:24:53 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.132 22:24:53 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:53.132 22:24:53 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.132 22:24:53 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:53.132 22:24:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:53.132 22:24:53 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.391 22:24:53 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.391 22:24:53 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:53.391 22:24:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:53.391 22:24:53 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.391 22:24:53 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.391 22:24:53 -- setup/devices.sh@53 -- # local found=0 00:04:53.391 22:24:53 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.391 22:24:53 -- setup/devices.sh@56 -- # : 00:04:53.391 22:24:53 -- setup/devices.sh@59 -- # local pci status 00:04:53.391 22:24:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:53.391 22:24:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.391 22:24:53 -- setup/devices.sh@47 -- # setup output config 00:04:53.391 22:24:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.391 22:24:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.391 22:24:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.391 22:24:54 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:53.391 22:24:54 -- setup/devices.sh@63 -- # found=1 00:04:53.391 22:24:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.391 22:24:54 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.391 22:24:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.958 22:24:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.958 22:24:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.958 22:24:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.958 22:24:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.958 22:24:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.958 22:24:54 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:53.958 22:24:54 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.958 22:24:54 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.958 22:24:54 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.958 22:24:54 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:53.958 22:24:54 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.958 22:24:54 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.958 22:24:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.958 22:24:54 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.958 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.958 22:24:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.958 22:24:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:54.217 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:54.217 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:54.217 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:54.217 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:54.217 22:24:54 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:54.217 22:24:54 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:54.217 22:24:54 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.217 22:24:54 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:54.217 22:24:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:54.217 22:24:54 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.217 22:24:54 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.217 22:24:54 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:54.217 22:24:54 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:54.217 22:24:54 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.217 22:24:54 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.217 22:24:54 -- setup/devices.sh@53 -- # local found=0 00:04:54.217 22:24:54 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.217 22:24:54 -- setup/devices.sh@56 -- # : 00:04:54.217 22:24:54 -- setup/devices.sh@59 -- # local pci status 00:04:54.217 22:24:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.217 22:24:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:54.217 22:24:54 -- setup/devices.sh@47 -- # setup output config 00:04:54.217 22:24:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.217 22:24:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.476 22:24:55 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.476 22:24:55 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:54.476 22:24:55 -- setup/devices.sh@63 -- # found=1 00:04:54.476 22:24:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.476 22:24:55 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.476 
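That teardown is cleanup_nvme between the partition-based pass and the whole-disk pass of the test: unmount the nvme_mount directory if it is still a mount point, then wipe the filesystem signature off the partition and the GPT/protective-MBR signatures off the disk so the next mkfs starts clean. A minimal equivalent of the same steps:

    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    mountpoint -q "$mnt" && umount "$mnt"                   # unmount only when mounted
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # drops the ext4 magic (53 ef)
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # drops "EFI PART" and the 55 aa PMBR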
22:24:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.734 22:24:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.734 22:24:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.993 22:24:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.993 22:24:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.993 22:24:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.993 22:24:55 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:54.993 22:24:55 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.993 22:24:55 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.993 22:24:55 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.993 22:24:55 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.993 22:24:55 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:54.993 22:24:55 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:54.993 22:24:55 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:54.993 22:24:55 -- setup/devices.sh@50 -- # local mount_point= 00:04:54.993 22:24:55 -- setup/devices.sh@51 -- # local test_file= 00:04:54.993 22:24:55 -- setup/devices.sh@53 -- # local found=0 00:04:54.993 22:24:55 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.993 22:24:55 -- setup/devices.sh@59 -- # local pci status 00:04:54.993 22:24:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.993 22:24:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:54.993 22:24:55 -- setup/devices.sh@47 -- # setup output config 00:04:54.993 22:24:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.993 22:24:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.252 22:24:55 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.252 22:24:55 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:55.252 22:24:55 -- setup/devices.sh@63 -- # found=1 00:04:55.252 22:24:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.252 22:24:55 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.252 22:24:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.511 22:24:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.511 22:24:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.770 22:24:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.770 22:24:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.770 22:24:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.770 22:24:56 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:55.770 22:24:56 -- setup/devices.sh@68 -- # return 0 00:04:55.770 22:24:56 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:55.770 22:24:56 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.770 22:24:56 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.770 22:24:56 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.770 22:24:56 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.770 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:55.770 00:04:55.770 real 0m4.642s 00:04:55.770 user 0m1.084s 00:04:55.770 sys 0m1.250s 00:04:55.770 22:24:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.770 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.770 ************************************ 00:04:55.770 END TEST nvme_mount 00:04:55.770 ************************************ 00:04:55.770 22:24:56 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:55.770 22:24:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.770 22:24:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.770 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.770 ************************************ 00:04:55.770 START TEST dm_mount 00:04:55.770 ************************************ 00:04:55.770 22:24:56 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:55.770 22:24:56 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:55.770 22:24:56 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:55.770 22:24:56 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:55.770 22:24:56 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:55.770 22:24:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.770 22:24:56 -- setup/common.sh@40 -- # local part_no=2 00:04:55.770 22:24:56 -- setup/common.sh@41 -- # local size=1073741824 00:04:55.770 22:24:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.770 22:24:56 -- setup/common.sh@44 -- # parts=() 00:04:55.770 22:24:56 -- setup/common.sh@44 -- # local parts 00:04:55.770 22:24:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.770 22:24:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.770 22:24:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.770 22:24:56 -- setup/common.sh@46 -- # (( part++ )) 00:04:55.770 22:24:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.770 22:24:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.770 22:24:56 -- setup/common.sh@46 -- # (( part++ )) 00:04:55.770 22:24:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.770 22:24:56 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:55.770 22:24:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.770 22:24:56 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:57.148 Creating new GPT entries in memory. 00:04:57.148 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.148 other utilities. 00:04:57.148 22:24:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.148 22:24:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.148 22:24:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.148 22:24:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.148 22:24:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:58.083 Creating new GPT entries in memory. 00:04:58.083 The operation has completed successfully. 00:04:58.083 22:24:58 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.083 22:24:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.083 22:24:58 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:58.083 22:24:58 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.083 22:24:58 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:59.019 The operation has completed successfully. 00:04:59.019 22:24:59 -- setup/common.sh@57 -- # (( part++ )) 00:04:59.019 22:24:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.019 22:24:59 -- setup/common.sh@62 -- # wait 66026 00:04:59.019 22:24:59 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:59.019 22:24:59 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.019 22:24:59 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.019 22:24:59 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:59.019 22:24:59 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:59.019 22:24:59 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.019 22:24:59 -- setup/devices.sh@161 -- # break 00:04:59.019 22:24:59 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.019 22:24:59 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:59.019 22:24:59 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:59.019 22:24:59 -- setup/devices.sh@166 -- # dm=dm-0 00:04:59.019 22:24:59 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:59.019 22:24:59 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:59.019 22:24:59 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.019 22:24:59 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:59.019 22:24:59 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.019 22:24:59 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.019 22:24:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:59.019 22:24:59 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.019 22:24:59 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.019 22:24:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:59.019 22:24:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:59.019 22:24:59 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.019 22:24:59 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.019 22:24:59 -- setup/devices.sh@53 -- # local found=0 00:04:59.019 22:24:59 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.019 22:24:59 -- setup/devices.sh@56 -- # : 00:04:59.019 22:24:59 -- setup/devices.sh@59 -- # local pci status 00:04:59.019 22:24:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.019 22:24:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.019 22:24:59 -- setup/devices.sh@47 -- # setup output config 00:04:59.019 22:24:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.019 22:24:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.278 22:24:59 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.278 22:24:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:59.278 22:24:59 -- setup/devices.sh@63 -- # found=1 00:04:59.278 22:24:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.278 22:24:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.278 22:24:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.536 22:25:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.536 22:25:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.796 22:25:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.796 22:25:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.796 22:25:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.796 22:25:00 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:59.796 22:25:00 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.796 22:25:00 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.796 22:25:00 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.796 22:25:00 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.796 22:25:00 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:59.796 22:25:00 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:59.796 22:25:00 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:59.796 22:25:00 -- setup/devices.sh@50 -- # local mount_point= 00:04:59.796 22:25:00 -- setup/devices.sh@51 -- # local test_file= 00:04:59.796 22:25:00 -- setup/devices.sh@53 -- # local found=0 00:04:59.796 22:25:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.796 22:25:00 -- setup/devices.sh@59 -- # local pci status 00:04:59.796 22:25:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.796 22:25:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.796 22:25:00 -- setup/devices.sh@47 -- # setup output config 00:04:59.796 22:25:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.796 22:25:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.054 22:25:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.054 22:25:00 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:00.054 22:25:00 -- setup/devices.sh@63 -- # found=1 00:05:00.054 22:25:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.054 22:25:00 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.054 22:25:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.313 22:25:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.313 22:25:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.313 22:25:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.313 22:25:01 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.572 22:25:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.572 22:25:01 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.572 22:25:01 -- setup/devices.sh@68 -- # return 0 00:05:00.572 22:25:01 -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.572 22:25:01 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.572 22:25:01 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.572 22:25:01 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:00.572 22:25:01 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.572 22:25:01 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:00.572 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.572 22:25:01 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.572 22:25:01 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:00.572 00:05:00.572 real 0m4.718s 00:05:00.572 user 0m0.721s 00:05:00.572 sys 0m0.915s 00:05:00.572 22:25:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.572 22:25:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.572 ************************************ 00:05:00.572 END TEST dm_mount 00:05:00.572 ************************************ 00:05:00.572 22:25:01 -- setup/devices.sh@1 -- # cleanup 00:05:00.572 22:25:01 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:00.572 22:25:01 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.572 22:25:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.572 22:25:01 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.572 22:25:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.572 22:25:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.830 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:00.830 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:00.830 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:00.830 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:00.830 22:25:01 -- setup/devices.sh@12 -- # cleanup_dm 00:05:00.830 22:25:01 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.830 22:25:01 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.830 22:25:01 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.830 22:25:01 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.830 22:25:01 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.830 22:25:01 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:00.830 00:05:00.830 real 0m11.060s 00:05:00.830 user 0m2.555s 00:05:00.830 sys 0m2.827s 00:05:00.830 22:25:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.830 22:25:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.830 ************************************ 00:05:00.830 END TEST devices 00:05:00.830 ************************************ 00:05:00.830 00:05:00.830 real 0m24.004s 00:05:00.830 user 0m8.180s 00:05:00.830 sys 0m10.078s 00:05:00.830 22:25:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.830 22:25:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.830 ************************************ 00:05:00.830 END TEST setup.sh 00:05:00.830 ************************************ 00:05:01.088 22:25:01 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.088 Hugepages 00:05:01.088 node hugesize free / total 00:05:01.088 node0 1048576kB 0 / 0 00:05:01.088 node0 2048kB 2048 / 2048 00:05:01.088 00:05:01.088 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.347 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:01.347 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:01.347 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:01.347 22:25:02 -- spdk/autotest.sh@128 -- # uname -s 00:05:01.347 22:25:02 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:01.347 22:25:02 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:01.347 22:25:02 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.286 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.286 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.286 22:25:02 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:03.223 22:25:03 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:03.223 22:25:03 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:03.223 22:25:03 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.223 22:25:03 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:03.223 22:25:03 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:03.223 22:25:03 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:03.224 22:25:03 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.224 22:25:03 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.224 22:25:03 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:03.482 22:25:03 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:03.482 22:25:04 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:03.482 22:25:04 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.740 Waiting for block devices as requested 00:05:03.740 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:03.999 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:03.999 22:25:04 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:03.999 22:25:04 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:03.999 22:25:04 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:03.999 22:25:04 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:03.999 22:25:04 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:03.999 22:25:04 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:03.999 22:25:04 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:03.999 22:25:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:03.999 22:25:04 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:03.999 22:25:04 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:03.999 22:25:04 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:03.999 22:25:04 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:03.999 22:25:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:03.999 22:25:04 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:03.999 22:25:04 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:03.999 22:25:04 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:03.999 22:25:04 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:03.999 22:25:04 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:03.999 22:25:04 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:03.999 22:25:04 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:03.999 22:25:04 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:03.999 22:25:04 -- common/autotest_common.sh@1552 -- # continue 00:05:03.999 22:25:04 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:03.999 22:25:04 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:03.999 22:25:04 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:03.999 22:25:04 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:03.999 22:25:04 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:03.999 22:25:04 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:03.999 22:25:04 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:03.999 22:25:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:03.999 22:25:04 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:03.999 22:25:04 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:03.999 22:25:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:03.999 22:25:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:03.999 22:25:04 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:03.999 22:25:04 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:03.999 22:25:04 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:03.999 22:25:04 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:03.999 22:25:04 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:03.999 22:25:04 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:03.999 22:25:04 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:03.999 22:25:04 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:03.999 22:25:04 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:03.999 22:25:04 -- common/autotest_common.sh@1552 -- # continue 00:05:04.000 22:25:04 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:04.000 22:25:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.000 22:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:04.000 22:25:04 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:04.000 22:25:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.000 22:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:04.000 22:25:04 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.936 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.936 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:04.936 22:25:05 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:04.936 22:25:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.936 22:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.195 22:25:05 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:05.195 22:25:05 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:05.195 22:25:05 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.195 22:25:05 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:05.195 22:25:05 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:05.195 22:25:05 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:05.195 22:25:05 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:05.195 22:25:05 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:05.195 22:25:05 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.195 22:25:05 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.195 22:25:05 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:05.195 22:25:05 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:05.195 22:25:05 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:05.195 22:25:05 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.195 22:25:05 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:05.195 22:25:05 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.195 22:25:05 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.195 22:25:05 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.195 22:25:05 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:05.195 22:25:05 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.195 22:25:05 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.195 22:25:05 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:05.195 22:25:05 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:05.195 22:25:05 -- common/autotest_common.sh@1588 -- # return 0 00:05:05.195 22:25:05 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:05.195 22:25:05 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:05.195 22:25:05 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.195 22:25:05 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.195 22:25:05 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:05.195 22:25:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.195 22:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.195 22:25:05 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.195 22:25:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.195 22:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.195 22:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.195 ************************************ 00:05:05.195 START TEST env 00:05:05.195 ************************************ 00:05:05.195 22:25:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.195 * Looking for test storage... 
00:05:05.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.195 22:25:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.195 22:25:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.195 22:25:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.454 22:25:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.454 22:25:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.454 22:25:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.454 22:25:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.454 22:25:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.454 22:25:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.454 22:25:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.454 22:25:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.454 22:25:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.454 22:25:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.454 22:25:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.454 22:25:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.454 22:25:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.454 22:25:05 -- scripts/common.sh@344 -- # : 1 00:05:05.454 22:25:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.454 22:25:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.454 22:25:05 -- scripts/common.sh@364 -- # decimal 1 00:05:05.454 22:25:05 -- scripts/common.sh@352 -- # local d=1 00:05:05.454 22:25:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.454 22:25:05 -- scripts/common.sh@354 -- # echo 1 00:05:05.454 22:25:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.454 22:25:05 -- scripts/common.sh@365 -- # decimal 2 00:05:05.454 22:25:05 -- scripts/common.sh@352 -- # local d=2 00:05:05.454 22:25:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.454 22:25:05 -- scripts/common.sh@354 -- # echo 2 00:05:05.454 22:25:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.454 22:25:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.454 22:25:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.454 22:25:05 -- scripts/common.sh@367 -- # return 0 00:05:05.454 22:25:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.454 22:25:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.454 --rc genhtml_branch_coverage=1 00:05:05.454 --rc genhtml_function_coverage=1 00:05:05.454 --rc genhtml_legend=1 00:05:05.454 --rc geninfo_all_blocks=1 00:05:05.454 --rc geninfo_unexecuted_blocks=1 00:05:05.454 00:05:05.454 ' 00:05:05.454 22:25:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.454 --rc genhtml_branch_coverage=1 00:05:05.454 --rc genhtml_function_coverage=1 00:05:05.454 --rc genhtml_legend=1 00:05:05.454 --rc geninfo_all_blocks=1 00:05:05.454 --rc geninfo_unexecuted_blocks=1 00:05:05.454 00:05:05.454 ' 00:05:05.454 22:25:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.454 --rc genhtml_branch_coverage=1 00:05:05.454 --rc genhtml_function_coverage=1 00:05:05.454 --rc genhtml_legend=1 00:05:05.454 --rc geninfo_all_blocks=1 00:05:05.454 --rc geninfo_unexecuted_blocks=1 00:05:05.454 00:05:05.454 ' 00:05:05.454 22:25:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.454 --rc genhtml_branch_coverage=1 00:05:05.454 --rc genhtml_function_coverage=1 00:05:05.454 --rc genhtml_legend=1 00:05:05.454 --rc geninfo_all_blocks=1 00:05:05.454 --rc geninfo_unexecuted_blocks=1 00:05:05.455 00:05:05.455 ' 00:05:05.455 22:25:05 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.455 22:25:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.455 22:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.455 22:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.455 ************************************ 00:05:05.455 START TEST env_memory 00:05:05.455 ************************************ 00:05:05.455 22:25:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.455 00:05:05.455 00:05:05.455 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.455 http://cunit.sourceforge.net/ 00:05:05.455 00:05:05.455 00:05:05.455 Suite: memory 00:05:05.455 Test: alloc and free memory map ...[2024-11-20 22:25:06.049315] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.455 passed 00:05:05.455 Test: mem map translation ...[2024-11-20 22:25:06.080737] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.455 [2024-11-20 22:25:06.080944] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.455 [2024-11-20 22:25:06.081221] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.455 [2024-11-20 22:25:06.081429] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.455 passed 00:05:05.455 Test: mem map registration ...[2024-11-20 22:25:06.145728] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:05.455 [2024-11-20 22:25:06.145921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:05.455 passed 00:05:05.714 Test: mem map adjacent registrations ...passed 00:05:05.714 00:05:05.714 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.714 suites 1 1 n/a 0 0 00:05:05.714 tests 4 4 4 0 0 00:05:05.714 asserts 152 152 152 0 n/a 00:05:05.714 00:05:05.714 Elapsed time = 0.213 seconds 00:05:05.714 00:05:05.714 real 0m0.237s 00:05:05.714 user 0m0.213s 00:05:05.714 sys 0m0.016s 00:05:05.714 22:25:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.714 22:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.714 ************************************ 00:05:05.714 END TEST env_memory 00:05:05.714 ************************************ 00:05:05.714 22:25:06 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.714 22:25:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.714 22:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.714 22:25:06 -- 
common/autotest_common.sh@10 -- # set +x 00:05:05.714 ************************************ 00:05:05.714 START TEST env_vtophys 00:05:05.714 ************************************ 00:05:05.714 22:25:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.714 EAL: lib.eal log level changed from notice to debug 00:05:05.714 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 1 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 2 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 3 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 4 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 5 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 6 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 7 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 8 as core 0 on socket 0 00:05:05.714 EAL: Detected lcore 9 as core 0 on socket 0 00:05:05.714 EAL: Maximum logical cores by configuration: 128 00:05:05.714 EAL: Detected CPU lcores: 10 00:05:05.714 EAL: Detected NUMA nodes: 1 00:05:05.714 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:05.714 EAL: Detected shared linkage of DPDK 00:05:05.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:05.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:05.714 EAL: Registered [vdev] bus. 00:05:05.714 EAL: bus.vdev log level changed from disabled to notice 00:05:05.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:05.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:05.714 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:05.714 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:05.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:05.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:05.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:05.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:05.714 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.714 EAL: No shared files mode enabled, IPC is disabled 00:05:05.714 EAL: Selected IOVA mode 'PA' 00:05:05.714 EAL: Probing VFIO support... 00:05:05.714 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.714 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:05.714 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.714 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.714 EAL: Setting up physically contiguous memory... 
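A minimal host-preparation sketch implied by the EAL output above, using only standard kernel interfaces; these commands are illustrative and were not part of the captured run (the hugepage count simply mirrors the node0 2048kB 2048 / 2048 figure reported earlier):

  # vfio is absent ("Module /sys/module/vfio not found"), so setup.sh had bound the NVMe
  # controllers to uio_pci_generic; loading vfio-pci beforehand could avoid that fallback.
  modprobe uio_pci_generic
  # Reserve the 2 MiB hugepages that back the "physically contiguous memory" the EAL sets up below.
  echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  grep -i hugepages /proc/meminfo   # confirm HugePages_Total / HugePages_Free before starting the EAL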
00:05:05.714 EAL: Setting maximum number of open files to 524288 00:05:05.714 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.714 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.714 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.714 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.714 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.715 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.715 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.715 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.715 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.715 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.715 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.715 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.715 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.715 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.715 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.715 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.715 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.715 EAL: Hugepages will be freed exactly as allocated. 00:05:05.715 EAL: No shared files mode enabled, IPC is disabled 00:05:05.715 EAL: No shared files mode enabled, IPC is disabled 00:05:05.715 EAL: TSC frequency is ~2200000 KHz 00:05:05.715 EAL: Main lcore 0 is ready (tid=7f7a874f3a00;cpuset=[0]) 00:05:05.715 EAL: Trying to obtain current memory policy. 00:05:05.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.715 EAL: Restoring previous memory policy: 0 00:05:05.715 EAL: request: mp_malloc_sync 00:05:05.715 EAL: No shared files mode enabled, IPC is disabled 00:05:05.715 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.715 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.715 EAL: No shared files mode enabled, IPC is disabled 00:05:05.715 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.715 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.715 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:05.974 00:05:05.974 00:05:05.974 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.974 http://cunit.sourceforge.net/ 00:05:05.974 00:05:05.974 00:05:05.974 Suite: components_suite 00:05:05.974 Test: vtophys_malloc_test ...passed 00:05:05.974 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:05.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.974 EAL: Restoring previous memory policy: 4 00:05:05.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.974 EAL: request: mp_malloc_sync 00:05:05.974 EAL: No shared files mode enabled, IPC is disabled 00:05:05.974 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.974 EAL: request: mp_malloc_sync 00:05:05.974 EAL: No shared files mode enabled, IPC is disabled 00:05:05.974 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.974 EAL: Trying to obtain current memory policy. 00:05:05.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.974 EAL: Restoring previous memory policy: 4 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.975 EAL: Trying to obtain current memory policy. 00:05:05.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.975 EAL: Restoring previous memory policy: 4 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.975 EAL: Trying to obtain current memory policy. 00:05:05.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.975 EAL: Restoring previous memory policy: 4 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.975 EAL: Trying to obtain current memory policy. 00:05:05.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.975 EAL: Restoring previous memory policy: 4 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.975 EAL: Trying to obtain current memory policy. 
00:05:05.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.975 EAL: Restoring previous memory policy: 4 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.975 EAL: Trying to obtain current memory policy. 00:05:05.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.975 EAL: Restoring previous memory policy: 4 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was expanded by 130MB 00:05:05.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.975 EAL: request: mp_malloc_sync 00:05:05.975 EAL: No shared files mode enabled, IPC is disabled 00:05:05.975 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.975 EAL: Trying to obtain current memory policy. 00:05:05.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.234 EAL: Restoring previous memory policy: 4 00:05:06.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.234 EAL: request: mp_malloc_sync 00:05:06.234 EAL: No shared files mode enabled, IPC is disabled 00:05:06.234 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.234 EAL: request: mp_malloc_sync 00:05:06.234 EAL: No shared files mode enabled, IPC is disabled 00:05:06.234 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.234 EAL: Trying to obtain current memory policy. 00:05:06.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.492 EAL: Restoring previous memory policy: 4 00:05:06.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.492 EAL: request: mp_malloc_sync 00:05:06.492 EAL: No shared files mode enabled, IPC is disabled 00:05:06.492 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.751 EAL: request: mp_malloc_sync 00:05:06.751 EAL: No shared files mode enabled, IPC is disabled 00:05:06.751 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.751 EAL: Trying to obtain current memory policy. 
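The expand/shrink sizes above (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, and 1026 MB below) suggest the test doubles its allocation each round on top of the initial 2 MB expansion; each expansion is served from the 2 MiB hugepage pool, so it can be watched from the host while the test runs. A small observation sketch, not part of the captured run:

  # Free hugepages drop while the heap is expanded and recover after each shrink.
  grep -E 'HugePages_(Total|Free)' /proc/meminfo
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages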
00:05:06.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.010 EAL: Restoring previous memory policy: 4 00:05:07.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.010 EAL: request: mp_malloc_sync 00:05:07.010 EAL: No shared files mode enabled, IPC is disabled 00:05:07.010 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.273 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.556 passed 00:05:07.556 00:05:07.556 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.556 suites 1 1 n/a 0 0 00:05:07.556 tests 2 2 2 0 0 00:05:07.556 asserts 5323 5323 5323 0 n/a 00:05:07.556 00:05:07.556 Elapsed time = 1.720 seconds 00:05:07.556 EAL: request: mp_malloc_sync 00:05:07.556 EAL: No shared files mode enabled, IPC is disabled 00:05:07.556 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:07.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.556 EAL: request: mp_malloc_sync 00:05:07.556 EAL: No shared files mode enabled, IPC is disabled 00:05:07.556 EAL: Heap on socket 0 was shrunk by 2MB 00:05:07.556 EAL: No shared files mode enabled, IPC is disabled 00:05:07.556 EAL: No shared files mode enabled, IPC is disabled 00:05:07.556 EAL: No shared files mode enabled, IPC is disabled 00:05:07.556 00:05:07.556 real 0m1.922s 00:05:07.556 user 0m1.083s 00:05:07.556 sys 0m0.700s 00:05:07.556 22:25:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.556 ************************************ 00:05:07.556 END TEST env_vtophys 00:05:07.556 ************************************ 00:05:07.556 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.556 22:25:08 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:07.556 22:25:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.556 22:25:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.556 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.556 ************************************ 00:05:07.556 START TEST env_pci 00:05:07.556 ************************************ 00:05:07.556 22:25:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:07.859 00:05:07.860 00:05:07.860 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.860 http://cunit.sourceforge.net/ 00:05:07.860 00:05:07.860 00:05:07.860 Suite: pci 00:05:07.860 Test: pci_hook ...[2024-11-20 22:25:08.288964] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67176 has claimed it 00:05:07.860 passed 00:05:07.860 00:05:07.860 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.860 suites 1 1 n/a 0 0 00:05:07.860 tests 1 1 1 0 0 00:05:07.860 asserts 25 25 25 0 n/a 00:05:07.860 00:05:07.860 Elapsed time = 0.003 seconds 00:05:07.860 EAL: Cannot find device (10000:00:01.0) 00:05:07.860 EAL: Failed to attach device on primary process 00:05:07.860 ************************************ 00:05:07.860 END TEST env_pci 00:05:07.860 ************************************ 00:05:07.860 00:05:07.860 real 0m0.023s 00:05:07.860 user 0m0.011s 00:05:07.860 sys 0m0.012s 00:05:07.860 22:25:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.860 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.860 22:25:08 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:07.860 22:25:08 -- env/env.sh@15 -- # uname 00:05:07.860 22:25:08 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.860 22:25:08 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:07.860 22:25:08 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.860 22:25:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:07.860 22:25:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.860 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.860 ************************************ 00:05:07.860 START TEST env_dpdk_post_init 00:05:07.860 ************************************ 00:05:07.860 22:25:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.860 EAL: Detected CPU lcores: 10 00:05:07.860 EAL: Detected NUMA nodes: 1 00:05:07.860 EAL: Detected shared linkage of DPDK 00:05:07.860 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.860 EAL: Selected IOVA mode 'PA' 00:05:07.860 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:07.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:07.860 Starting DPDK initialization... 00:05:07.860 Starting SPDK post initialization... 00:05:07.860 SPDK NVMe probe 00:05:07.860 Attaching to 0000:00:06.0 00:05:07.860 Attaching to 0000:00:07.0 00:05:07.860 Attached to 0000:00:06.0 00:05:07.860 Attached to 0000:00:07.0 00:05:07.860 Cleaning up... 00:05:07.860 00:05:07.860 real 0m0.189s 00:05:07.860 user 0m0.055s 00:05:07.860 sys 0m0.036s 00:05:07.860 22:25:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.860 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.860 ************************************ 00:05:07.860 END TEST env_dpdk_post_init 00:05:07.860 ************************************ 00:05:08.119 22:25:08 -- env/env.sh@26 -- # uname 00:05:08.119 22:25:08 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:08.119 22:25:08 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.119 22:25:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.119 22:25:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.119 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:08.119 ************************************ 00:05:08.119 START TEST env_mem_callbacks 00:05:08.119 ************************************ 00:05:08.119 22:25:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.119 EAL: Detected CPU lcores: 10 00:05:08.119 EAL: Detected NUMA nodes: 1 00:05:08.119 EAL: Detected shared linkage of DPDK 00:05:08.119 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.119 EAL: Selected IOVA mode 'PA' 00:05:08.119 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.119 00:05:08.119 00:05:08.119 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.119 http://cunit.sourceforge.net/ 00:05:08.119 00:05:08.119 00:05:08.119 Suite: memory 00:05:08.119 Test: test ... 
00:05:08.119 register 0x200000200000 2097152 00:05:08.119 malloc 3145728 00:05:08.119 register 0x200000400000 4194304 00:05:08.119 buf 0x200000500000 len 3145728 PASSED 00:05:08.119 malloc 64 00:05:08.119 buf 0x2000004fff40 len 64 PASSED 00:05:08.119 malloc 4194304 00:05:08.119 register 0x200000800000 6291456 00:05:08.119 buf 0x200000a00000 len 4194304 PASSED 00:05:08.119 free 0x200000500000 3145728 00:05:08.119 free 0x2000004fff40 64 00:05:08.119 unregister 0x200000400000 4194304 PASSED 00:05:08.119 free 0x200000a00000 4194304 00:05:08.119 unregister 0x200000800000 6291456 PASSED 00:05:08.119 malloc 8388608 00:05:08.119 register 0x200000400000 10485760 00:05:08.119 buf 0x200000600000 len 8388608 PASSED 00:05:08.119 free 0x200000600000 8388608 00:05:08.119 unregister 0x200000400000 10485760 PASSED 00:05:08.119 passed 00:05:08.119 00:05:08.119 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.119 suites 1 1 n/a 0 0 00:05:08.119 tests 1 1 1 0 0 00:05:08.119 asserts 15 15 15 0 n/a 00:05:08.119 00:05:08.119 Elapsed time = 0.010 seconds 00:05:08.119 00:05:08.119 real 0m0.145s 00:05:08.119 user 0m0.015s 00:05:08.119 sys 0m0.028s 00:05:08.119 22:25:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.119 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:08.119 ************************************ 00:05:08.119 END TEST env_mem_callbacks 00:05:08.119 ************************************ 00:05:08.119 00:05:08.119 real 0m3.018s 00:05:08.119 user 0m1.577s 00:05:08.119 sys 0m1.071s 00:05:08.119 22:25:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.119 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:08.119 ************************************ 00:05:08.119 END TEST env 00:05:08.119 ************************************ 00:05:08.377 22:25:08 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.377 22:25:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.377 22:25:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.377 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:08.377 ************************************ 00:05:08.377 START TEST rpc 00:05:08.377 ************************************ 00:05:08.377 22:25:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.377 * Looking for test storage... 
00:05:08.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.377 22:25:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:08.377 22:25:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:08.377 22:25:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:08.377 22:25:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:08.377 22:25:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:08.377 22:25:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:08.377 22:25:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:08.377 22:25:09 -- scripts/common.sh@335 -- # IFS=.-: 00:05:08.377 22:25:09 -- scripts/common.sh@335 -- # read -ra ver1 00:05:08.377 22:25:09 -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.377 22:25:09 -- scripts/common.sh@336 -- # read -ra ver2 00:05:08.377 22:25:09 -- scripts/common.sh@337 -- # local 'op=<' 00:05:08.377 22:25:09 -- scripts/common.sh@339 -- # ver1_l=2 00:05:08.377 22:25:09 -- scripts/common.sh@340 -- # ver2_l=1 00:05:08.377 22:25:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:08.377 22:25:09 -- scripts/common.sh@343 -- # case "$op" in 00:05:08.377 22:25:09 -- scripts/common.sh@344 -- # : 1 00:05:08.377 22:25:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:08.377 22:25:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.377 22:25:09 -- scripts/common.sh@364 -- # decimal 1 00:05:08.377 22:25:09 -- scripts/common.sh@352 -- # local d=1 00:05:08.377 22:25:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.377 22:25:09 -- scripts/common.sh@354 -- # echo 1 00:05:08.377 22:25:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:08.377 22:25:09 -- scripts/common.sh@365 -- # decimal 2 00:05:08.377 22:25:09 -- scripts/common.sh@352 -- # local d=2 00:05:08.377 22:25:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.377 22:25:09 -- scripts/common.sh@354 -- # echo 2 00:05:08.377 22:25:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:08.377 22:25:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:08.377 22:25:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:08.377 22:25:09 -- scripts/common.sh@367 -- # return 0 00:05:08.377 22:25:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.377 22:25:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:08.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.377 --rc genhtml_branch_coverage=1 00:05:08.377 --rc genhtml_function_coverage=1 00:05:08.377 --rc genhtml_legend=1 00:05:08.377 --rc geninfo_all_blocks=1 00:05:08.377 --rc geninfo_unexecuted_blocks=1 00:05:08.377 00:05:08.377 ' 00:05:08.377 22:25:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:08.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.377 --rc genhtml_branch_coverage=1 00:05:08.377 --rc genhtml_function_coverage=1 00:05:08.377 --rc genhtml_legend=1 00:05:08.377 --rc geninfo_all_blocks=1 00:05:08.377 --rc geninfo_unexecuted_blocks=1 00:05:08.377 00:05:08.377 ' 00:05:08.377 22:25:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:08.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.377 --rc genhtml_branch_coverage=1 00:05:08.377 --rc genhtml_function_coverage=1 00:05:08.377 --rc genhtml_legend=1 00:05:08.377 --rc geninfo_all_blocks=1 00:05:08.377 --rc geninfo_unexecuted_blocks=1 00:05:08.377 00:05:08.377 ' 00:05:08.377 22:25:09 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:08.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.377 --rc genhtml_branch_coverage=1 00:05:08.377 --rc genhtml_function_coverage=1 00:05:08.377 --rc genhtml_legend=1 00:05:08.377 --rc geninfo_all_blocks=1 00:05:08.377 --rc geninfo_unexecuted_blocks=1 00:05:08.377 00:05:08.377 ' 00:05:08.377 22:25:09 -- rpc/rpc.sh@65 -- # spdk_pid=67298 00:05:08.377 22:25:09 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.377 22:25:09 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:08.377 22:25:09 -- rpc/rpc.sh@67 -- # waitforlisten 67298 00:05:08.377 22:25:09 -- common/autotest_common.sh@829 -- # '[' -z 67298 ']' 00:05:08.377 22:25:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.377 22:25:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.377 22:25:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.377 22:25:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.377 22:25:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.637 [2024-11-20 22:25:09.135498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:08.637 [2024-11-20 22:25:09.135598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67298 ] 00:05:08.637 [2024-11-20 22:25:09.271762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.637 [2024-11-20 22:25:09.342342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.637 [2024-11-20 22:25:09.342522] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:08.637 [2024-11-20 22:25:09.342539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67298' to capture a snapshot of events at runtime. 00:05:08.637 [2024-11-20 22:25:09.342551] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67298 for offline analysis/debug. 
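The rpc_cmd invocations in the integrity test below travel over the /var/tmp/spdk.sock UNIX socket this spdk_tgt instance is now listening on. Outside the test harness the same requests could be issued with SPDK's scripts/rpc.py JSON-RPC client; a sketch reusing the method names and arguments that appear in the log:

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512             # returns the new bdev name, e.g. Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length           # 2 once both bdevs exist
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0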
00:05:08.637 [2024-11-20 22:25:09.342592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.574 22:25:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.574 22:25:10 -- common/autotest_common.sh@862 -- # return 0 00:05:09.574 22:25:10 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.574 22:25:10 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.574 22:25:10 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:09.574 22:25:10 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:09.574 22:25:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.574 22:25:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.574 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.574 ************************************ 00:05:09.574 START TEST rpc_integrity 00:05:09.574 ************************************ 00:05:09.574 22:25:10 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:09.574 22:25:10 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.574 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.574 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.574 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.574 22:25:10 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.574 22:25:10 -- rpc/rpc.sh@13 -- # jq length 00:05:09.574 22:25:10 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.574 22:25:10 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.574 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.574 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.574 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.574 22:25:10 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.574 22:25:10 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.574 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.574 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.574 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.574 22:25:10 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.574 { 00:05:09.574 "aliases": [ 00:05:09.574 "da7dd90d-d21b-443b-8e4d-8d02f18869a3" 00:05:09.574 ], 00:05:09.574 "assigned_rate_limits": { 00:05:09.574 "r_mbytes_per_sec": 0, 00:05:09.574 "rw_ios_per_sec": 0, 00:05:09.574 "rw_mbytes_per_sec": 0, 00:05:09.574 "w_mbytes_per_sec": 0 00:05:09.574 }, 00:05:09.574 "block_size": 512, 00:05:09.574 "claimed": false, 00:05:09.574 "driver_specific": {}, 00:05:09.574 "memory_domains": [ 00:05:09.574 { 00:05:09.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.574 "dma_device_type": 2 00:05:09.574 } 00:05:09.574 ], 00:05:09.574 "name": "Malloc0", 00:05:09.574 "num_blocks": 16384, 00:05:09.574 "product_name": "Malloc disk", 00:05:09.574 "supported_io_types": { 00:05:09.574 "abort": true, 00:05:09.574 "compare": false, 00:05:09.574 "compare_and_write": false, 00:05:09.574 "flush": true, 00:05:09.574 "nvme_admin": false, 00:05:09.574 "nvme_io": false, 00:05:09.574 "read": true, 00:05:09.574 "reset": true, 00:05:09.574 "unmap": true, 00:05:09.574 "write": true, 00:05:09.574 "write_zeroes": true 00:05:09.574 }, 
00:05:09.574 "uuid": "da7dd90d-d21b-443b-8e4d-8d02f18869a3", 00:05:09.574 "zoned": false 00:05:09.574 } 00:05:09.574 ]' 00:05:09.574 22:25:10 -- rpc/rpc.sh@17 -- # jq length 00:05:09.574 22:25:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.574 22:25:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.574 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.574 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.833 [2024-11-20 22:25:10.306302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.833 [2024-11-20 22:25:10.306379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.833 [2024-11-20 22:25:10.306410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1066490 00:05:09.833 [2024-11-20 22:25:10.306419] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.833 [2024-11-20 22:25:10.307833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.833 [2024-11-20 22:25:10.307862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.833 Passthru0 00:05:09.833 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.833 22:25:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.833 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.833 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.833 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.833 22:25:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.833 { 00:05:09.833 "aliases": [ 00:05:09.833 "da7dd90d-d21b-443b-8e4d-8d02f18869a3" 00:05:09.833 ], 00:05:09.833 "assigned_rate_limits": { 00:05:09.833 "r_mbytes_per_sec": 0, 00:05:09.833 "rw_ios_per_sec": 0, 00:05:09.833 "rw_mbytes_per_sec": 0, 00:05:09.833 "w_mbytes_per_sec": 0 00:05:09.833 }, 00:05:09.834 "block_size": 512, 00:05:09.834 "claim_type": "exclusive_write", 00:05:09.834 "claimed": true, 00:05:09.834 "driver_specific": {}, 00:05:09.834 "memory_domains": [ 00:05:09.834 { 00:05:09.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.834 "dma_device_type": 2 00:05:09.834 } 00:05:09.834 ], 00:05:09.834 "name": "Malloc0", 00:05:09.834 "num_blocks": 16384, 00:05:09.834 "product_name": "Malloc disk", 00:05:09.834 "supported_io_types": { 00:05:09.834 "abort": true, 00:05:09.834 "compare": false, 00:05:09.834 "compare_and_write": false, 00:05:09.834 "flush": true, 00:05:09.834 "nvme_admin": false, 00:05:09.834 "nvme_io": false, 00:05:09.834 "read": true, 00:05:09.834 "reset": true, 00:05:09.834 "unmap": true, 00:05:09.834 "write": true, 00:05:09.834 "write_zeroes": true 00:05:09.834 }, 00:05:09.834 "uuid": "da7dd90d-d21b-443b-8e4d-8d02f18869a3", 00:05:09.834 "zoned": false 00:05:09.834 }, 00:05:09.834 { 00:05:09.834 "aliases": [ 00:05:09.834 "ea3ebdd9-4c07-57e2-9c45-8029d23d9e75" 00:05:09.834 ], 00:05:09.834 "assigned_rate_limits": { 00:05:09.834 "r_mbytes_per_sec": 0, 00:05:09.834 "rw_ios_per_sec": 0, 00:05:09.834 "rw_mbytes_per_sec": 0, 00:05:09.834 "w_mbytes_per_sec": 0 00:05:09.834 }, 00:05:09.834 "block_size": 512, 00:05:09.834 "claimed": false, 00:05:09.834 "driver_specific": { 00:05:09.834 "passthru": { 00:05:09.834 "base_bdev_name": "Malloc0", 00:05:09.834 "name": "Passthru0" 00:05:09.834 } 00:05:09.834 }, 00:05:09.834 "memory_domains": [ 00:05:09.834 { 00:05:09.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.834 "dma_device_type": 2 00:05:09.834 } 00:05:09.834 ], 
00:05:09.834 "name": "Passthru0", 00:05:09.834 "num_blocks": 16384, 00:05:09.834 "product_name": "passthru", 00:05:09.834 "supported_io_types": { 00:05:09.834 "abort": true, 00:05:09.834 "compare": false, 00:05:09.834 "compare_and_write": false, 00:05:09.834 "flush": true, 00:05:09.834 "nvme_admin": false, 00:05:09.834 "nvme_io": false, 00:05:09.834 "read": true, 00:05:09.834 "reset": true, 00:05:09.834 "unmap": true, 00:05:09.834 "write": true, 00:05:09.834 "write_zeroes": true 00:05:09.834 }, 00:05:09.834 "uuid": "ea3ebdd9-4c07-57e2-9c45-8029d23d9e75", 00:05:09.834 "zoned": false 00:05:09.834 } 00:05:09.834 ]' 00:05:09.834 22:25:10 -- rpc/rpc.sh@21 -- # jq length 00:05:09.834 22:25:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.834 22:25:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.834 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.834 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.834 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.834 22:25:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:09.834 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.834 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.834 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.834 22:25:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.834 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.834 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.834 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.834 22:25:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.834 22:25:10 -- rpc/rpc.sh@26 -- # jq length 00:05:09.834 22:25:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.834 00:05:09.834 real 0m0.309s 00:05:09.834 user 0m0.211s 00:05:09.834 sys 0m0.031s 00:05:09.834 22:25:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.834 ************************************ 00:05:09.834 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.834 END TEST rpc_integrity 00:05:09.834 ************************************ 00:05:09.834 22:25:10 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:09.834 22:25:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.834 22:25:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.834 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.834 ************************************ 00:05:09.834 START TEST rpc_plugins 00:05:09.834 ************************************ 00:05:09.834 22:25:10 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:09.834 22:25:10 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:09.834 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.834 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.834 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.834 22:25:10 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:09.834 22:25:10 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:09.834 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.834 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.834 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.834 22:25:10 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:09.834 { 00:05:09.834 "aliases": [ 00:05:09.834 "9b06b0c0-0179-4616-92c3-61322e8c4171" 00:05:09.834 ], 00:05:09.834 "assigned_rate_limits": { 00:05:09.834 "r_mbytes_per_sec": 0, 00:05:09.834 
"rw_ios_per_sec": 0, 00:05:09.834 "rw_mbytes_per_sec": 0, 00:05:09.834 "w_mbytes_per_sec": 0 00:05:09.834 }, 00:05:09.834 "block_size": 4096, 00:05:09.834 "claimed": false, 00:05:09.834 "driver_specific": {}, 00:05:09.834 "memory_domains": [ 00:05:09.834 { 00:05:09.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.834 "dma_device_type": 2 00:05:09.834 } 00:05:09.834 ], 00:05:09.834 "name": "Malloc1", 00:05:09.834 "num_blocks": 256, 00:05:09.834 "product_name": "Malloc disk", 00:05:09.834 "supported_io_types": { 00:05:09.834 "abort": true, 00:05:09.834 "compare": false, 00:05:09.834 "compare_and_write": false, 00:05:09.834 "flush": true, 00:05:09.834 "nvme_admin": false, 00:05:09.834 "nvme_io": false, 00:05:09.834 "read": true, 00:05:09.834 "reset": true, 00:05:09.834 "unmap": true, 00:05:09.834 "write": true, 00:05:09.834 "write_zeroes": true 00:05:09.834 }, 00:05:09.834 "uuid": "9b06b0c0-0179-4616-92c3-61322e8c4171", 00:05:09.834 "zoned": false 00:05:09.834 } 00:05:09.834 ]' 00:05:09.834 22:25:10 -- rpc/rpc.sh@32 -- # jq length 00:05:10.093 22:25:10 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.093 22:25:10 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.093 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.093 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:10.093 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.093 22:25:10 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.093 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.093 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:10.093 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.093 22:25:10 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.093 22:25:10 -- rpc/rpc.sh@36 -- # jq length 00:05:10.093 22:25:10 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.093 00:05:10.093 real 0m0.160s 00:05:10.093 user 0m0.103s 00:05:10.093 sys 0m0.019s 00:05:10.093 22:25:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.093 ************************************ 00:05:10.093 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:10.093 END TEST rpc_plugins 00:05:10.093 ************************************ 00:05:10.093 22:25:10 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.093 22:25:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.093 22:25:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.093 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:10.093 ************************************ 00:05:10.093 START TEST rpc_trace_cmd_test 00:05:10.093 ************************************ 00:05:10.093 22:25:10 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:10.093 22:25:10 -- rpc/rpc.sh@40 -- # local info 00:05:10.093 22:25:10 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.093 22:25:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.093 22:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:10.093 22:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.093 22:25:10 -- rpc/rpc.sh@42 -- # info='{ 00:05:10.093 "bdev": { 00:05:10.093 "mask": "0x8", 00:05:10.093 "tpoint_mask": "0xffffffffffffffff" 00:05:10.093 }, 00:05:10.093 "bdev_nvme": { 00:05:10.093 "mask": "0x4000", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "blobfs": { 00:05:10.093 "mask": "0x80", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "dsa": { 00:05:10.093 "mask": "0x200", 00:05:10.093 
"tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "ftl": { 00:05:10.093 "mask": "0x40", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "iaa": { 00:05:10.093 "mask": "0x1000", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "iscsi_conn": { 00:05:10.093 "mask": "0x2", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "nvme_pcie": { 00:05:10.093 "mask": "0x800", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "nvme_tcp": { 00:05:10.093 "mask": "0x2000", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "nvmf_rdma": { 00:05:10.093 "mask": "0x10", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.093 }, 00:05:10.093 "nvmf_tcp": { 00:05:10.093 "mask": "0x20", 00:05:10.093 "tpoint_mask": "0x0" 00:05:10.094 }, 00:05:10.094 "scsi": { 00:05:10.094 "mask": "0x4", 00:05:10.094 "tpoint_mask": "0x0" 00:05:10.094 }, 00:05:10.094 "thread": { 00:05:10.094 "mask": "0x400", 00:05:10.094 "tpoint_mask": "0x0" 00:05:10.094 }, 00:05:10.094 "tpoint_group_mask": "0x8", 00:05:10.094 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67298" 00:05:10.094 }' 00:05:10.094 22:25:10 -- rpc/rpc.sh@43 -- # jq length 00:05:10.094 22:25:10 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:10.094 22:25:10 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.352 22:25:10 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.352 22:25:10 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.352 22:25:10 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.352 22:25:10 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.352 22:25:10 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.352 22:25:10 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.352 22:25:11 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.352 00:05:10.352 real 0m0.275s 00:05:10.352 user 0m0.239s 00:05:10.352 sys 0m0.027s 00:05:10.352 22:25:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.353 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.353 ************************************ 00:05:10.353 END TEST rpc_trace_cmd_test 00:05:10.353 ************************************ 00:05:10.353 22:25:11 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:10.353 22:25:11 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:10.353 22:25:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.353 22:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.353 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.353 ************************************ 00:05:10.353 START TEST go_rpc 00:05:10.353 ************************************ 00:05:10.353 22:25:11 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:10.353 22:25:11 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.353 22:25:11 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:10.353 22:25:11 -- rpc/rpc.sh@52 -- # jq length 00:05:10.612 22:25:11 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:10.612 22:25:11 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.612 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.612 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.612 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.612 22:25:11 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:10.612 22:25:11 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.612 22:25:11 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["1c34ef1f-4bea-4637-95af-35440067dcc0"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"1c34ef1f-4bea-4637-95af-35440067dcc0","zoned":false}]' 00:05:10.612 22:25:11 -- rpc/rpc.sh@57 -- # jq length 00:05:10.612 22:25:11 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:10.612 22:25:11 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.612 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.612 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.612 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.612 22:25:11 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.612 22:25:11 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:10.612 22:25:11 -- rpc/rpc.sh@61 -- # jq length 00:05:10.612 22:25:11 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:10.612 00:05:10.612 real 0m0.226s 00:05:10.612 user 0m0.154s 00:05:10.612 sys 0m0.038s 00:05:10.612 22:25:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.612 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.612 ************************************ 00:05:10.612 END TEST go_rpc 00:05:10.612 ************************************ 00:05:10.612 22:25:11 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.612 22:25:11 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.612 22:25:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.612 22:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.612 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.612 ************************************ 00:05:10.612 START TEST rpc_daemon_integrity 00:05:10.612 ************************************ 00:05:10.612 22:25:11 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:10.612 22:25:11 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.612 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.612 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.871 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.871 22:25:11 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.871 22:25:11 -- rpc/rpc.sh@13 -- # jq length 00:05:10.871 22:25:11 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.871 22:25:11 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.871 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.871 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.871 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.871 22:25:11 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:10.871 22:25:11 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.871 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.871 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.871 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.871 22:25:11 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.871 { 00:05:10.871 "aliases": [ 00:05:10.871 "7aded2aa-bae4-47b9-89a8-39bf7a92a5ed" 00:05:10.871 ], 00:05:10.871 "assigned_rate_limits": { 00:05:10.871 
"r_mbytes_per_sec": 0, 00:05:10.871 "rw_ios_per_sec": 0, 00:05:10.871 "rw_mbytes_per_sec": 0, 00:05:10.871 "w_mbytes_per_sec": 0 00:05:10.871 }, 00:05:10.871 "block_size": 512, 00:05:10.871 "claimed": false, 00:05:10.871 "driver_specific": {}, 00:05:10.871 "memory_domains": [ 00:05:10.871 { 00:05:10.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.871 "dma_device_type": 2 00:05:10.871 } 00:05:10.871 ], 00:05:10.871 "name": "Malloc3", 00:05:10.871 "num_blocks": 16384, 00:05:10.871 "product_name": "Malloc disk", 00:05:10.871 "supported_io_types": { 00:05:10.871 "abort": true, 00:05:10.871 "compare": false, 00:05:10.871 "compare_and_write": false, 00:05:10.871 "flush": true, 00:05:10.871 "nvme_admin": false, 00:05:10.871 "nvme_io": false, 00:05:10.871 "read": true, 00:05:10.871 "reset": true, 00:05:10.871 "unmap": true, 00:05:10.871 "write": true, 00:05:10.871 "write_zeroes": true 00:05:10.871 }, 00:05:10.871 "uuid": "7aded2aa-bae4-47b9-89a8-39bf7a92a5ed", 00:05:10.871 "zoned": false 00:05:10.871 } 00:05:10.871 ]' 00:05:10.871 22:25:11 -- rpc/rpc.sh@17 -- # jq length 00:05:10.871 22:25:11 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.871 22:25:11 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:10.871 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.871 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.871 [2024-11-20 22:25:11.494779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:10.871 [2024-11-20 22:25:11.494814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.871 [2024-11-20 22:25:11.494828] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeb91d0 00:05:10.871 [2024-11-20 22:25:11.494836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.871 [2024-11-20 22:25:11.495973] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.871 [2024-11-20 22:25:11.495995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.871 Passthru0 00:05:10.871 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.871 22:25:11 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.871 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.871 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.871 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.871 22:25:11 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.871 { 00:05:10.871 "aliases": [ 00:05:10.871 "7aded2aa-bae4-47b9-89a8-39bf7a92a5ed" 00:05:10.871 ], 00:05:10.871 "assigned_rate_limits": { 00:05:10.871 "r_mbytes_per_sec": 0, 00:05:10.871 "rw_ios_per_sec": 0, 00:05:10.871 "rw_mbytes_per_sec": 0, 00:05:10.871 "w_mbytes_per_sec": 0 00:05:10.871 }, 00:05:10.871 "block_size": 512, 00:05:10.871 "claim_type": "exclusive_write", 00:05:10.871 "claimed": true, 00:05:10.871 "driver_specific": {}, 00:05:10.871 "memory_domains": [ 00:05:10.871 { 00:05:10.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.871 "dma_device_type": 2 00:05:10.871 } 00:05:10.871 ], 00:05:10.871 "name": "Malloc3", 00:05:10.871 "num_blocks": 16384, 00:05:10.871 "product_name": "Malloc disk", 00:05:10.871 "supported_io_types": { 00:05:10.871 "abort": true, 00:05:10.871 "compare": false, 00:05:10.871 "compare_and_write": false, 00:05:10.871 "flush": true, 00:05:10.871 "nvme_admin": false, 00:05:10.871 "nvme_io": false, 00:05:10.871 "read": true, 00:05:10.871 "reset": true, 
00:05:10.871 "unmap": true, 00:05:10.871 "write": true, 00:05:10.871 "write_zeroes": true 00:05:10.871 }, 00:05:10.871 "uuid": "7aded2aa-bae4-47b9-89a8-39bf7a92a5ed", 00:05:10.871 "zoned": false 00:05:10.871 }, 00:05:10.871 { 00:05:10.871 "aliases": [ 00:05:10.871 "60a1613b-060b-5c1b-a139-016e69199ae8" 00:05:10.871 ], 00:05:10.871 "assigned_rate_limits": { 00:05:10.871 "r_mbytes_per_sec": 0, 00:05:10.871 "rw_ios_per_sec": 0, 00:05:10.871 "rw_mbytes_per_sec": 0, 00:05:10.871 "w_mbytes_per_sec": 0 00:05:10.871 }, 00:05:10.871 "block_size": 512, 00:05:10.871 "claimed": false, 00:05:10.871 "driver_specific": { 00:05:10.871 "passthru": { 00:05:10.871 "base_bdev_name": "Malloc3", 00:05:10.871 "name": "Passthru0" 00:05:10.871 } 00:05:10.871 }, 00:05:10.871 "memory_domains": [ 00:05:10.871 { 00:05:10.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.871 "dma_device_type": 2 00:05:10.871 } 00:05:10.871 ], 00:05:10.871 "name": "Passthru0", 00:05:10.871 "num_blocks": 16384, 00:05:10.871 "product_name": "passthru", 00:05:10.871 "supported_io_types": { 00:05:10.871 "abort": true, 00:05:10.871 "compare": false, 00:05:10.871 "compare_and_write": false, 00:05:10.871 "flush": true, 00:05:10.871 "nvme_admin": false, 00:05:10.871 "nvme_io": false, 00:05:10.871 "read": true, 00:05:10.871 "reset": true, 00:05:10.871 "unmap": true, 00:05:10.871 "write": true, 00:05:10.871 "write_zeroes": true 00:05:10.871 }, 00:05:10.871 "uuid": "60a1613b-060b-5c1b-a139-016e69199ae8", 00:05:10.871 "zoned": false 00:05:10.871 } 00:05:10.871 ]' 00:05:10.871 22:25:11 -- rpc/rpc.sh@21 -- # jq length 00:05:10.871 22:25:11 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.871 22:25:11 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.871 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.871 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.871 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.871 22:25:11 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:10.871 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.871 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.871 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.871 22:25:11 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.871 22:25:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.871 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:11.130 22:25:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.131 22:25:11 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.131 22:25:11 -- rpc/rpc.sh@26 -- # jq length 00:05:11.131 22:25:11 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.131 00:05:11.131 real 0m0.322s 00:05:11.131 user 0m0.214s 00:05:11.131 sys 0m0.039s 00:05:11.131 22:25:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.131 ************************************ 00:05:11.131 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:11.131 END TEST rpc_daemon_integrity 00:05:11.131 ************************************ 00:05:11.131 22:25:11 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.131 22:25:11 -- rpc/rpc.sh@84 -- # killprocess 67298 00:05:11.131 22:25:11 -- common/autotest_common.sh@936 -- # '[' -z 67298 ']' 00:05:11.131 22:25:11 -- common/autotest_common.sh@940 -- # kill -0 67298 00:05:11.131 22:25:11 -- common/autotest_common.sh@941 -- # uname 00:05:11.131 22:25:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.131 22:25:11 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67298 00:05:11.131 22:25:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.131 22:25:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.131 killing process with pid 67298 00:05:11.131 22:25:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67298' 00:05:11.131 22:25:11 -- common/autotest_common.sh@955 -- # kill 67298 00:05:11.131 22:25:11 -- common/autotest_common.sh@960 -- # wait 67298 00:05:11.389 00:05:11.389 real 0m3.207s 00:05:11.389 user 0m4.210s 00:05:11.389 sys 0m0.798s 00:05:11.390 22:25:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.390 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.390 ************************************ 00:05:11.390 END TEST rpc 00:05:11.390 ************************************ 00:05:11.649 22:25:12 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:11.649 22:25:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.649 22:25:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.649 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.649 ************************************ 00:05:11.649 START TEST rpc_client 00:05:11.649 ************************************ 00:05:11.649 22:25:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:11.649 * Looking for test storage... 00:05:11.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:11.649 22:25:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.649 22:25:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.649 22:25:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.649 22:25:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.649 22:25:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.649 22:25:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.649 22:25:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.649 22:25:12 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.649 22:25:12 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.649 22:25:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.649 22:25:12 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.649 22:25:12 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.649 22:25:12 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.649 22:25:12 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.649 22:25:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.649 22:25:12 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.649 22:25:12 -- scripts/common.sh@344 -- # : 1 00:05:11.649 22:25:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.649 22:25:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.649 22:25:12 -- scripts/common.sh@364 -- # decimal 1 00:05:11.649 22:25:12 -- scripts/common.sh@352 -- # local d=1 00:05:11.649 22:25:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.649 22:25:12 -- scripts/common.sh@354 -- # echo 1 00:05:11.649 22:25:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.649 22:25:12 -- scripts/common.sh@365 -- # decimal 2 00:05:11.649 22:25:12 -- scripts/common.sh@352 -- # local d=2 00:05:11.649 22:25:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.649 22:25:12 -- scripts/common.sh@354 -- # echo 2 00:05:11.649 22:25:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.649 22:25:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.649 22:25:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.649 22:25:12 -- scripts/common.sh@367 -- # return 0 00:05:11.649 22:25:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.649 22:25:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.649 --rc genhtml_branch_coverage=1 00:05:11.649 --rc genhtml_function_coverage=1 00:05:11.649 --rc genhtml_legend=1 00:05:11.649 --rc geninfo_all_blocks=1 00:05:11.649 --rc geninfo_unexecuted_blocks=1 00:05:11.649 00:05:11.649 ' 00:05:11.649 22:25:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.649 --rc genhtml_branch_coverage=1 00:05:11.649 --rc genhtml_function_coverage=1 00:05:11.649 --rc genhtml_legend=1 00:05:11.649 --rc geninfo_all_blocks=1 00:05:11.649 --rc geninfo_unexecuted_blocks=1 00:05:11.649 00:05:11.649 ' 00:05:11.649 22:25:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.649 --rc genhtml_branch_coverage=1 00:05:11.649 --rc genhtml_function_coverage=1 00:05:11.649 --rc genhtml_legend=1 00:05:11.649 --rc geninfo_all_blocks=1 00:05:11.649 --rc geninfo_unexecuted_blocks=1 00:05:11.649 00:05:11.649 ' 00:05:11.649 22:25:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.649 --rc genhtml_branch_coverage=1 00:05:11.649 --rc genhtml_function_coverage=1 00:05:11.649 --rc genhtml_legend=1 00:05:11.649 --rc geninfo_all_blocks=1 00:05:11.649 --rc geninfo_unexecuted_blocks=1 00:05:11.649 00:05:11.649 ' 00:05:11.649 22:25:12 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:11.649 OK 00:05:11.649 22:25:12 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:11.649 00:05:11.649 real 0m0.204s 00:05:11.649 user 0m0.126s 00:05:11.649 sys 0m0.090s 00:05:11.649 22:25:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.649 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.649 ************************************ 00:05:11.649 END TEST rpc_client 00:05:11.649 ************************************ 00:05:11.649 22:25:12 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:11.649 22:25:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.649 22:25:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.909 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.909 ************************************ 00:05:11.909 START TEST 
json_config 00:05:11.909 ************************************ 00:05:11.910 22:25:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:11.910 22:25:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.910 22:25:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.910 22:25:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.910 22:25:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.910 22:25:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.910 22:25:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.910 22:25:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.910 22:25:12 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.910 22:25:12 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.910 22:25:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.910 22:25:12 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.910 22:25:12 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.910 22:25:12 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.910 22:25:12 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.910 22:25:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.910 22:25:12 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.910 22:25:12 -- scripts/common.sh@344 -- # : 1 00:05:11.910 22:25:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.910 22:25:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.910 22:25:12 -- scripts/common.sh@364 -- # decimal 1 00:05:11.910 22:25:12 -- scripts/common.sh@352 -- # local d=1 00:05:11.910 22:25:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.910 22:25:12 -- scripts/common.sh@354 -- # echo 1 00:05:11.910 22:25:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.910 22:25:12 -- scripts/common.sh@365 -- # decimal 2 00:05:11.910 22:25:12 -- scripts/common.sh@352 -- # local d=2 00:05:11.910 22:25:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.910 22:25:12 -- scripts/common.sh@354 -- # echo 2 00:05:11.910 22:25:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.910 22:25:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.910 22:25:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.910 22:25:12 -- scripts/common.sh@367 -- # return 0 00:05:11.910 22:25:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.910 22:25:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.910 --rc genhtml_branch_coverage=1 00:05:11.910 --rc genhtml_function_coverage=1 00:05:11.910 --rc genhtml_legend=1 00:05:11.910 --rc geninfo_all_blocks=1 00:05:11.910 --rc geninfo_unexecuted_blocks=1 00:05:11.910 00:05:11.910 ' 00:05:11.910 22:25:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.910 --rc genhtml_branch_coverage=1 00:05:11.910 --rc genhtml_function_coverage=1 00:05:11.910 --rc genhtml_legend=1 00:05:11.910 --rc geninfo_all_blocks=1 00:05:11.910 --rc geninfo_unexecuted_blocks=1 00:05:11.910 00:05:11.910 ' 00:05:11.910 22:25:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.910 --rc genhtml_branch_coverage=1 00:05:11.910 --rc genhtml_function_coverage=1 00:05:11.910 --rc genhtml_legend=1 00:05:11.910 --rc 
geninfo_all_blocks=1 00:05:11.910 --rc geninfo_unexecuted_blocks=1 00:05:11.910 00:05:11.910 ' 00:05:11.910 22:25:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.910 --rc genhtml_branch_coverage=1 00:05:11.910 --rc genhtml_function_coverage=1 00:05:11.910 --rc genhtml_legend=1 00:05:11.910 --rc geninfo_all_blocks=1 00:05:11.910 --rc geninfo_unexecuted_blocks=1 00:05:11.910 00:05:11.910 ' 00:05:11.910 22:25:12 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.910 22:25:12 -- nvmf/common.sh@7 -- # uname -s 00:05:11.910 22:25:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.910 22:25:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.910 22:25:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.910 22:25:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.910 22:25:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.910 22:25:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.910 22:25:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.910 22:25:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.910 22:25:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.910 22:25:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.910 22:25:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:05:11.910 22:25:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:05:11.910 22:25:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.910 22:25:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.910 22:25:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.910 22:25:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.910 22:25:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.910 22:25:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.910 22:25:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.910 22:25:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.910 22:25:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.910 22:25:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.910 
22:25:12 -- paths/export.sh@5 -- # export PATH 00:05:11.910 22:25:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.910 22:25:12 -- nvmf/common.sh@46 -- # : 0 00:05:11.910 22:25:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:11.910 22:25:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:11.910 22:25:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:11.910 22:25:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.910 22:25:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.910 22:25:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:11.910 22:25:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:11.910 22:25:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:11.910 22:25:12 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:11.910 22:25:12 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:11.910 22:25:12 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:11.910 22:25:12 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:11.910 22:25:12 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:11.910 22:25:12 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:11.910 22:25:12 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:11.910 22:25:12 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:11.910 22:25:12 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:11.910 22:25:12 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:11.910 22:25:12 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:11.910 22:25:12 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:11.910 22:25:12 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:11.910 22:25:12 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.910 22:25:12 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:11.910 INFO: JSON configuration test init 00:05:11.910 22:25:12 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:11.910 22:25:12 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:11.910 22:25:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.910 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.910 22:25:12 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:11.910 22:25:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.910 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.910 Waiting for target to run... 00:05:11.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
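(Editor's note: the trace below starts an spdk_tgt instance on a private RPC socket and waits for it to accept RPCs before any configuration is loaded. A minimal sketch of that start-and-wait step follows; the socket path, binary path, and the rpc_get_methods readiness probe are assumptions here, not the harness's exact waitforlisten helper.)

    # Sketch only: start spdk_tgt on a private RPC socket and wait until it answers RPCs.
    SOCK=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    tgt_pid=$!
    for i in $(seq 1 100); do
        # rpc_get_methods is assumed as the readiness probe; it is callable pre-init.
        if scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done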
00:05:11.911 22:25:12 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:11.911 22:25:12 -- json_config/json_config.sh@98 -- # local app=target 00:05:11.911 22:25:12 -- json_config/json_config.sh@99 -- # shift 00:05:11.911 22:25:12 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:11.911 22:25:12 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:11.911 22:25:12 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:11.911 22:25:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:11.911 22:25:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:11.911 22:25:12 -- json_config/json_config.sh@111 -- # app_pid[$app]=67619 00:05:11.911 22:25:12 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:11.911 22:25:12 -- json_config/json_config.sh@114 -- # waitforlisten 67619 /var/tmp/spdk_tgt.sock 00:05:11.911 22:25:12 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:11.911 22:25:12 -- common/autotest_common.sh@829 -- # '[' -z 67619 ']' 00:05:11.911 22:25:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.911 22:25:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.911 22:25:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.911 22:25:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.911 22:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:12.170 [2024-11-20 22:25:12.656735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:12.170 [2024-11-20 22:25:12.657016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67619 ] 00:05:12.737 [2024-11-20 22:25:13.235266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.737 [2024-11-20 22:25:13.317423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.737 [2024-11-20 22:25:13.317914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.995 22:25:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.995 22:25:13 -- common/autotest_common.sh@862 -- # return 0 00:05:12.995 22:25:13 -- json_config/json_config.sh@115 -- # echo '' 00:05:12.995 00:05:12.995 22:25:13 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:12.995 22:25:13 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:12.995 22:25:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.996 22:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.996 22:25:13 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:12.996 22:25:13 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:12.996 22:25:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.996 22:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.996 22:25:13 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:12.996 22:25:13 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:12.996 22:25:13 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:13.563 22:25:14 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:13.563 22:25:14 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:13.563 22:25:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.563 22:25:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.563 22:25:14 -- json_config/json_config.sh@48 -- # local ret=0 00:05:13.563 22:25:14 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:13.563 22:25:14 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:13.563 22:25:14 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:13.563 22:25:14 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:13.563 22:25:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:13.822 22:25:14 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:13.822 22:25:14 -- json_config/json_config.sh@51 -- # local get_types 00:05:13.822 22:25:14 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:13.822 22:25:14 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:13.822 22:25:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.822 22:25:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.822 22:25:14 -- json_config/json_config.sh@58 -- # return 0 00:05:13.822 22:25:14 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:13.822 22:25:14 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:13.822 22:25:14 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:13.822 22:25:14 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:13.822 22:25:14 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:13.822 22:25:14 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:13.822 22:25:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.822 22:25:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.822 22:25:14 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:13.822 22:25:14 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:13.822 22:25:14 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:13.822 22:25:14 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:13.822 22:25:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.080 MallocForNvmf0 00:05:14.080 22:25:14 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.080 22:25:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.339 MallocForNvmf1 00:05:14.339 22:25:14 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.339 22:25:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.597 [2024-11-20 22:25:15.243224] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
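(Editor's note: the notification-type check traced above amounts to asking the target which notification types are enabled and comparing them with the expected pair, bdev_register and bdev_unregister. A hedged one-file version of that check; the socket path and the ordering of the returned array are assumptions.)

    # Sketch: compare the target's notification types against the expected set.
    SOCK=/var/tmp/spdk_tgt.sock
    expected="bdev_register bdev_unregister"
    got=$(scripts/rpc.py -s "$SOCK" notify_get_types | jq -r '.[]' | tr '\n' ' ' | sed 's/ $//')
    [ "$got" = "$expected" ] || echo "unexpected notification types: $got"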
00:05:14.597 22:25:15 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:14.597 22:25:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:14.855 22:25:15 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:14.855 22:25:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.114 22:25:15 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.114 22:25:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.373 22:25:15 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:15.373 22:25:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:15.373 [2024-11-20 22:25:16.103780] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:15.632 22:25:16 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:15.632 22:25:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.632 22:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.632 22:25:16 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:15.632 22:25:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.632 22:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.632 22:25:16 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:15.632 22:25:16 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:15.632 22:25:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:15.891 MallocBdevForConfigChangeCheck 00:05:15.891 22:25:16 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:15.891 22:25:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.891 22:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.891 22:25:16 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:15.891 22:25:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.149 INFO: shutting down applications... 00:05:16.149 22:25:16 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
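(Editor's note: taken together, the RPCs above assemble a small NVMe-oF/TCP target — two malloc bdevs, a TCP transport, one subsystem carrying both namespaces and a listener — and save_config then snapshots the result. A hedged sketch of the same sequence against an already-running target; the socket path and the output file name are assumptions.)

    # Sketch of the subsystem setup shown above.
    SOCK=/var/tmp/spdk_tgt.sock
    RPC="scripts/rpc.py -s $SOCK"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > spdk_tgt_config.json   # snapshot reused later for the relaunch check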
00:05:16.149 22:25:16 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:16.149 22:25:16 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:16.149 22:25:16 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:16.149 22:25:16 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:16.716 Calling clear_iscsi_subsystem 00:05:16.716 Calling clear_nvmf_subsystem 00:05:16.716 Calling clear_nbd_subsystem 00:05:16.716 Calling clear_ublk_subsystem 00:05:16.716 Calling clear_vhost_blk_subsystem 00:05:16.716 Calling clear_vhost_scsi_subsystem 00:05:16.716 Calling clear_scheduler_subsystem 00:05:16.716 Calling clear_bdev_subsystem 00:05:16.716 Calling clear_accel_subsystem 00:05:16.716 Calling clear_vmd_subsystem 00:05:16.716 Calling clear_sock_subsystem 00:05:16.716 Calling clear_iobuf_subsystem 00:05:16.716 22:25:17 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:16.716 22:25:17 -- json_config/json_config.sh@396 -- # count=100 00:05:16.716 22:25:17 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:16.716 22:25:17 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.716 22:25:17 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:16.716 22:25:17 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:16.975 22:25:17 -- json_config/json_config.sh@398 -- # break 00:05:16.975 22:25:17 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:16.975 22:25:17 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:16.975 22:25:17 -- json_config/json_config.sh@120 -- # local app=target 00:05:16.975 22:25:17 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:16.975 22:25:17 -- json_config/json_config.sh@124 -- # [[ -n 67619 ]] 00:05:16.975 22:25:17 -- json_config/json_config.sh@127 -- # kill -SIGINT 67619 00:05:16.975 22:25:17 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:16.975 22:25:17 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:16.975 22:25:17 -- json_config/json_config.sh@130 -- # kill -0 67619 00:05:16.975 22:25:17 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:17.542 22:25:18 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:17.542 22:25:18 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:17.542 22:25:18 -- json_config/json_config.sh@130 -- # kill -0 67619 00:05:17.542 22:25:18 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:17.542 22:25:18 -- json_config/json_config.sh@132 -- # break 00:05:17.542 22:25:18 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:17.542 SPDK target shutdown done 00:05:17.542 22:25:18 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:17.542 INFO: relaunching applications... 00:05:17.542 22:25:18 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
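(Editor's note: the shutdown traced above is two steps — clear_config.py tears down the live objects subsystem by subsystem, then the target receives SIGINT and the harness polls with kill -0 until the pid is gone. A hedged sketch; the socket path and $tgt_pid variable are assumptions carried over from the launch sketch.)

    # Sketch of the teardown loop above; clear_config.py lives in test/json_config/.
    SOCK=/var/tmp/spdk_tgt.sock
    test/json_config/clear_config.py -s "$SOCK" clear_config
    kill -SIGINT "$tgt_pid"
    for i in $(seq 1 30); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # stop polling once the target has exited
        sleep 0.5
    done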
00:05:17.542 22:25:18 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:17.542 22:25:18 -- json_config/json_config.sh@98 -- # local app=target 00:05:17.542 22:25:18 -- json_config/json_config.sh@99 -- # shift 00:05:17.543 22:25:18 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:17.543 22:25:18 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:17.543 22:25:18 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:17.543 22:25:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:17.543 22:25:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:17.543 22:25:18 -- json_config/json_config.sh@111 -- # app_pid[$app]=67894 00:05:17.543 Waiting for target to run... 00:05:17.543 22:25:18 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:17.543 22:25:18 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:17.543 22:25:18 -- json_config/json_config.sh@114 -- # waitforlisten 67894 /var/tmp/spdk_tgt.sock 00:05:17.543 22:25:18 -- common/autotest_common.sh@829 -- # '[' -z 67894 ']' 00:05:17.543 22:25:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.543 22:25:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.543 22:25:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.543 22:25:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.543 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.543 [2024-11-20 22:25:18.081813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:17.543 [2024-11-20 22:25:18.081920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67894 ] 00:05:18.110 [2024-11-20 22:25:18.590460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.110 [2024-11-20 22:25:18.672393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:18.110 [2024-11-20 22:25:18.672569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.369 [2024-11-20 22:25:18.973374] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.369 [2024-11-20 22:25:19.005452] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.369 22:25:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.369 00:05:18.369 22:25:19 -- common/autotest_common.sh@862 -- # return 0 00:05:18.369 22:25:19 -- json_config/json_config.sh@115 -- # echo '' 00:05:18.369 INFO: Checking if target configuration is the same... 00:05:18.369 22:25:19 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:18.369 22:25:19 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
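(Editor's note: the relaunch traced above replays the saved JSON at startup instead of reconfiguring over RPC — --json points spdk_tgt at the snapshot written earlier. A hedged sketch; binary and file paths are assumptions.)

    # Sketch: relaunch spdk_tgt from the snapshot taken earlier.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    tgt_pid=$!
    # poll the RPC socket as in the first launch, then run the configuration comparison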
00:05:18.369 22:25:19 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.369 22:25:19 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:18.369 22:25:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.369 + '[' 2 -ne 2 ']' 00:05:18.369 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:18.629 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:18.629 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:18.629 +++ basename /dev/fd/62 00:05:18.629 ++ mktemp /tmp/62.XXX 00:05:18.629 + tmp_file_1=/tmp/62.fPf 00:05:18.629 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.629 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.629 + tmp_file_2=/tmp/spdk_tgt_config.json.VfA 00:05:18.629 + ret=0 00:05:18.629 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:18.887 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:18.887 + diff -u /tmp/62.fPf /tmp/spdk_tgt_config.json.VfA 00:05:18.887 INFO: JSON config files are the same 00:05:18.887 + echo 'INFO: JSON config files are the same' 00:05:18.887 + rm /tmp/62.fPf /tmp/spdk_tgt_config.json.VfA 00:05:18.887 + exit 0 00:05:18.887 22:25:19 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:18.887 INFO: changing configuration and checking if this can be detected... 00:05:18.887 22:25:19 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.887 22:25:19 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.887 22:25:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.146 22:25:19 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:19.146 22:25:19 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:19.146 22:25:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.146 + '[' 2 -ne 2 ']' 00:05:19.146 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:19.146 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:19.146 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:19.146 +++ basename /dev/fd/62 00:05:19.146 ++ mktemp /tmp/62.XXX 00:05:19.146 + tmp_file_1=/tmp/62.NvZ 00:05:19.146 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:19.146 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.146 + tmp_file_2=/tmp/spdk_tgt_config.json.Dr3 00:05:19.146 + ret=0 00:05:19.146 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:19.405 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:19.664 + diff -u /tmp/62.NvZ /tmp/spdk_tgt_config.json.Dr3 00:05:19.664 + ret=1 00:05:19.664 + echo '=== Start of file: /tmp/62.NvZ ===' 00:05:19.664 + cat /tmp/62.NvZ 00:05:19.664 + echo '=== End of file: /tmp/62.NvZ ===' 00:05:19.664 + echo '' 00:05:19.664 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Dr3 ===' 00:05:19.664 + cat /tmp/spdk_tgt_config.json.Dr3 00:05:19.664 + echo '=== End of file: /tmp/spdk_tgt_config.json.Dr3 ===' 00:05:19.664 + echo '' 00:05:19.664 + rm /tmp/62.NvZ /tmp/spdk_tgt_config.json.Dr3 00:05:19.664 + exit 1 00:05:19.664 INFO: configuration change detected. 00:05:19.664 22:25:20 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:19.664 22:25:20 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:19.664 22:25:20 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:19.664 22:25:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.664 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.664 22:25:20 -- json_config/json_config.sh@360 -- # local ret=0 00:05:19.664 22:25:20 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:19.664 22:25:20 -- json_config/json_config.sh@370 -- # [[ -n 67894 ]] 00:05:19.664 22:25:20 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:19.664 22:25:20 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:19.664 22:25:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.664 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.664 22:25:20 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:19.664 22:25:20 -- json_config/json_config.sh@246 -- # uname -s 00:05:19.664 22:25:20 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:19.664 22:25:20 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:19.664 22:25:20 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:19.664 22:25:20 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:19.664 22:25:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.664 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.664 22:25:20 -- json_config/json_config.sh@376 -- # killprocess 67894 00:05:19.664 22:25:20 -- common/autotest_common.sh@936 -- # '[' -z 67894 ']' 00:05:19.664 22:25:20 -- common/autotest_common.sh@940 -- # kill -0 67894 00:05:19.664 22:25:20 -- common/autotest_common.sh@941 -- # uname 00:05:19.664 22:25:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.664 22:25:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67894 00:05:19.664 22:25:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.664 killing process with pid 67894 00:05:19.664 22:25:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.664 22:25:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67894' 00:05:19.664 
22:25:20 -- common/autotest_common.sh@955 -- # kill 67894 00:05:19.664 22:25:20 -- common/autotest_common.sh@960 -- # wait 67894 00:05:19.923 22:25:20 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:19.923 22:25:20 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:19.923 22:25:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.923 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 22:25:20 -- json_config/json_config.sh@381 -- # return 0 00:05:19.923 INFO: Success 00:05:19.923 22:25:20 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:19.923 00:05:19.923 real 0m8.121s 00:05:19.923 user 0m11.205s 00:05:19.923 sys 0m2.054s 00:05:19.923 22:25:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.923 ************************************ 00:05:19.923 END TEST json_config 00:05:19.923 ************************************ 00:05:19.923 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 22:25:20 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.923 22:25:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.923 22:25:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.923 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 ************************************ 00:05:19.923 START TEST json_config_extra_key 00:05:19.923 ************************************ 00:05:19.923 22:25:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.923 22:25:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.923 22:25:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.923 22:25:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:20.181 22:25:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:20.181 22:25:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:20.181 22:25:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:20.181 22:25:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:20.181 22:25:20 -- scripts/common.sh@335 -- # IFS=.-: 00:05:20.181 22:25:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:20.181 22:25:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.182 22:25:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:20.182 22:25:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:20.182 22:25:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:20.182 22:25:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:20.182 22:25:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:20.182 22:25:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:20.182 22:25:20 -- scripts/common.sh@344 -- # : 1 00:05:20.182 22:25:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:20.182 22:25:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.182 22:25:20 -- scripts/common.sh@364 -- # decimal 1 00:05:20.182 22:25:20 -- scripts/common.sh@352 -- # local d=1 00:05:20.182 22:25:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.182 22:25:20 -- scripts/common.sh@354 -- # echo 1 00:05:20.182 22:25:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:20.182 22:25:20 -- scripts/common.sh@365 -- # decimal 2 00:05:20.182 22:25:20 -- scripts/common.sh@352 -- # local d=2 00:05:20.182 22:25:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.182 22:25:20 -- scripts/common.sh@354 -- # echo 2 00:05:20.182 22:25:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:20.182 22:25:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:20.182 22:25:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:20.182 22:25:20 -- scripts/common.sh@367 -- # return 0 00:05:20.182 22:25:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.182 22:25:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:20.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.182 --rc genhtml_branch_coverage=1 00:05:20.182 --rc genhtml_function_coverage=1 00:05:20.182 --rc genhtml_legend=1 00:05:20.182 --rc geninfo_all_blocks=1 00:05:20.182 --rc geninfo_unexecuted_blocks=1 00:05:20.182 00:05:20.182 ' 00:05:20.182 22:25:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:20.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.182 --rc genhtml_branch_coverage=1 00:05:20.182 --rc genhtml_function_coverage=1 00:05:20.182 --rc genhtml_legend=1 00:05:20.182 --rc geninfo_all_blocks=1 00:05:20.182 --rc geninfo_unexecuted_blocks=1 00:05:20.182 00:05:20.182 ' 00:05:20.182 22:25:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:20.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.182 --rc genhtml_branch_coverage=1 00:05:20.182 --rc genhtml_function_coverage=1 00:05:20.182 --rc genhtml_legend=1 00:05:20.182 --rc geninfo_all_blocks=1 00:05:20.182 --rc geninfo_unexecuted_blocks=1 00:05:20.182 00:05:20.182 ' 00:05:20.182 22:25:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:20.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.182 --rc genhtml_branch_coverage=1 00:05:20.182 --rc genhtml_function_coverage=1 00:05:20.182 --rc genhtml_legend=1 00:05:20.182 --rc geninfo_all_blocks=1 00:05:20.182 --rc geninfo_unexecuted_blocks=1 00:05:20.182 00:05:20.182 ' 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:20.182 22:25:20 -- nvmf/common.sh@7 -- # uname -s 00:05:20.182 22:25:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.182 22:25:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.182 22:25:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.182 22:25:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.182 22:25:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.182 22:25:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.182 22:25:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.182 22:25:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.182 22:25:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.182 22:25:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.182 22:25:20 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:05:20.182 22:25:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:05:20.182 22:25:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.182 22:25:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.182 22:25:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:20.182 22:25:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:20.182 22:25:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.182 22:25:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.182 22:25:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.182 22:25:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.182 22:25:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.182 22:25:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.182 22:25:20 -- paths/export.sh@5 -- # export PATH 00:05:20.182 22:25:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.182 22:25:20 -- nvmf/common.sh@46 -- # : 0 00:05:20.182 22:25:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:20.182 22:25:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:20.182 22:25:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:20.182 22:25:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.182 22:25:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.182 22:25:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:20.182 22:25:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:20.182 22:25:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:20.182 INFO: launching applications... 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68066 00:05:20.182 Waiting for target to run... 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68066 /var/tmp/spdk_tgt.sock 00:05:20.182 22:25:20 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:20.182 22:25:20 -- common/autotest_common.sh@829 -- # '[' -z 68066 ']' 00:05:20.182 22:25:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.182 22:25:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.182 22:25:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.182 22:25:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.182 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:20.182 [2024-11-20 22:25:20.790397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
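In sketch form, the launch-and-wait step recorded around this point looks roughly like the following; the real waitforlisten helper in autotest_common.sh is more involved (it is what prints the "Waiting for process to start up..." message above), so the polling loop here is only an approximation:

    # start the target against the extra-key JSON and wait until its RPC socket exists
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock
    cfg=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    "$tgt" -m 0x1 -s 1024 -r "$sock" --json "$cfg" &
    app_pid=$!                                     # 68066 in this run

    for _ in $(seq 1 100); do                      # crude stand-in for waitforlisten
        [[ -S "$sock" ]] && break
        sleep 0.1
    done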
00:05:20.182 [2024-11-20 22:25:20.790508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68066 ] 00:05:20.749 [2024-11-20 22:25:21.225580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.749 [2024-11-20 22:25:21.274144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.749 [2024-11-20 22:25:21.274296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.007 22:25:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.007 22:25:21 -- common/autotest_common.sh@862 -- # return 0 00:05:21.007 00:05:21.007 22:25:21 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:21.007 INFO: shutting down applications... 00:05:21.007 22:25:21 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:21.007 22:25:21 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:21.007 22:25:21 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:21.007 22:25:21 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:21.008 22:25:21 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68066 ]] 00:05:21.008 22:25:21 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68066 00:05:21.008 22:25:21 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:21.008 22:25:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:21.008 22:25:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68066 00:05:21.008 22:25:21 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:21.576 22:25:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:21.576 22:25:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:21.576 22:25:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68066 00:05:21.576 22:25:22 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:21.576 22:25:22 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:21.576 SPDK target shutdown done 00:05:21.576 22:25:22 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:21.576 22:25:22 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:21.576 Success 00:05:21.576 22:25:22 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:21.576 00:05:21.576 real 0m1.584s 00:05:21.576 user 0m1.295s 00:05:21.576 sys 0m0.459s 00:05:21.576 22:25:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.576 22:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.576 ************************************ 00:05:21.576 END TEST json_config_extra_key 00:05:21.576 ************************************ 00:05:21.576 22:25:22 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.576 22:25:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.576 22:25:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.576 22:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.576 ************************************ 00:05:21.576 START TEST alias_rpc 00:05:21.576 ************************************ 00:05:21.576 22:25:22 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.576 * Looking for test storage... 00:05:21.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:21.576 22:25:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.576 22:25:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.576 22:25:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.835 22:25:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.835 22:25:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.835 22:25:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.835 22:25:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.835 22:25:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.835 22:25:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.835 22:25:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.835 22:25:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.835 22:25:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.835 22:25:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.835 22:25:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.835 22:25:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.835 22:25:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.835 22:25:22 -- scripts/common.sh@344 -- # : 1 00:05:21.835 22:25:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.835 22:25:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.835 22:25:22 -- scripts/common.sh@364 -- # decimal 1 00:05:21.835 22:25:22 -- scripts/common.sh@352 -- # local d=1 00:05:21.835 22:25:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.835 22:25:22 -- scripts/common.sh@354 -- # echo 1 00:05:21.835 22:25:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.835 22:25:22 -- scripts/common.sh@365 -- # decimal 2 00:05:21.835 22:25:22 -- scripts/common.sh@352 -- # local d=2 00:05:21.835 22:25:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.835 22:25:22 -- scripts/common.sh@354 -- # echo 2 00:05:21.835 22:25:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.835 22:25:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.835 22:25:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.835 22:25:22 -- scripts/common.sh@367 -- # return 0 00:05:21.835 22:25:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.835 22:25:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.836 --rc genhtml_branch_coverage=1 00:05:21.836 --rc genhtml_function_coverage=1 00:05:21.836 --rc genhtml_legend=1 00:05:21.836 --rc geninfo_all_blocks=1 00:05:21.836 --rc geninfo_unexecuted_blocks=1 00:05:21.836 00:05:21.836 ' 00:05:21.836 22:25:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.836 --rc genhtml_branch_coverage=1 00:05:21.836 --rc genhtml_function_coverage=1 00:05:21.836 --rc genhtml_legend=1 00:05:21.836 --rc geninfo_all_blocks=1 00:05:21.836 --rc geninfo_unexecuted_blocks=1 00:05:21.836 00:05:21.836 ' 00:05:21.836 22:25:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.836 --rc genhtml_branch_coverage=1 00:05:21.836 --rc genhtml_function_coverage=1 00:05:21.836 --rc genhtml_legend=1 
00:05:21.836 --rc geninfo_all_blocks=1 00:05:21.836 --rc geninfo_unexecuted_blocks=1 00:05:21.836 00:05:21.836 ' 00:05:21.836 22:25:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.836 --rc genhtml_branch_coverage=1 00:05:21.836 --rc genhtml_function_coverage=1 00:05:21.836 --rc genhtml_legend=1 00:05:21.836 --rc geninfo_all_blocks=1 00:05:21.836 --rc geninfo_unexecuted_blocks=1 00:05:21.836 00:05:21.836 ' 00:05:21.836 22:25:22 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.836 22:25:22 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68155 00:05:21.836 22:25:22 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68155 00:05:21.836 22:25:22 -- common/autotest_common.sh@829 -- # '[' -z 68155 ']' 00:05:21.836 22:25:22 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.836 22:25:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.836 22:25:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.836 22:25:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.836 22:25:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.836 22:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.836 [2024-11-20 22:25:22.471538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:21.836 [2024-11-20 22:25:22.471642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68155 ] 00:05:22.095 [2024-11-20 22:25:22.607087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.095 [2024-11-20 22:25:22.661643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.095 [2024-11-20 22:25:22.661798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.031 22:25:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.031 22:25:23 -- common/autotest_common.sh@862 -- # return 0 00:05:23.031 22:25:23 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:23.031 22:25:23 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68155 00:05:23.031 22:25:23 -- common/autotest_common.sh@936 -- # '[' -z 68155 ']' 00:05:23.031 22:25:23 -- common/autotest_common.sh@940 -- # kill -0 68155 00:05:23.031 22:25:23 -- common/autotest_common.sh@941 -- # uname 00:05:23.031 22:25:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:23.031 22:25:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68155 00:05:23.031 22:25:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:23.031 22:25:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:23.031 killing process with pid 68155 00:05:23.031 22:25:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68155' 00:05:23.031 22:25:23 -- common/autotest_common.sh@955 -- # kill 68155 00:05:23.031 22:25:23 -- common/autotest_common.sh@960 -- # wait 68155 00:05:23.290 00:05:23.290 real 0m1.801s 00:05:23.290 user 0m1.988s 00:05:23.290 sys 0m0.441s 00:05:23.290 22:25:24 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.290 22:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.290 ************************************ 00:05:23.290 END TEST alias_rpc 00:05:23.290 ************************************ 00:05:23.549 22:25:24 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:23.549 22:25:24 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.549 22:25:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.549 22:25:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.549 22:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.549 ************************************ 00:05:23.549 START TEST dpdk_mem_utility 00:05:23.549 ************************************ 00:05:23.549 22:25:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.549 * Looking for test storage... 00:05:23.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:23.549 22:25:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.549 22:25:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:23.549 22:25:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.549 22:25:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:23.549 22:25:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:23.549 22:25:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:23.549 22:25:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:23.549 22:25:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:23.549 22:25:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:23.549 22:25:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.549 22:25:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:23.549 22:25:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:23.549 22:25:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:23.549 22:25:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:23.549 22:25:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:23.549 22:25:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:23.549 22:25:24 -- scripts/common.sh@344 -- # : 1 00:05:23.549 22:25:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:23.549 22:25:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.549 22:25:24 -- scripts/common.sh@364 -- # decimal 1 00:05:23.549 22:25:24 -- scripts/common.sh@352 -- # local d=1 00:05:23.549 22:25:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.549 22:25:24 -- scripts/common.sh@354 -- # echo 1 00:05:23.549 22:25:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:23.549 22:25:24 -- scripts/common.sh@365 -- # decimal 2 00:05:23.549 22:25:24 -- scripts/common.sh@352 -- # local d=2 00:05:23.549 22:25:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.549 22:25:24 -- scripts/common.sh@354 -- # echo 2 00:05:23.549 22:25:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:23.549 22:25:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:23.550 22:25:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:23.550 22:25:24 -- scripts/common.sh@367 -- # return 0 00:05:23.550 22:25:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.550 22:25:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:23.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.550 --rc genhtml_branch_coverage=1 00:05:23.550 --rc genhtml_function_coverage=1 00:05:23.550 --rc genhtml_legend=1 00:05:23.550 --rc geninfo_all_blocks=1 00:05:23.550 --rc geninfo_unexecuted_blocks=1 00:05:23.550 00:05:23.550 ' 00:05:23.550 22:25:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:23.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.550 --rc genhtml_branch_coverage=1 00:05:23.550 --rc genhtml_function_coverage=1 00:05:23.550 --rc genhtml_legend=1 00:05:23.550 --rc geninfo_all_blocks=1 00:05:23.550 --rc geninfo_unexecuted_blocks=1 00:05:23.550 00:05:23.550 ' 00:05:23.550 22:25:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:23.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.550 --rc genhtml_branch_coverage=1 00:05:23.550 --rc genhtml_function_coverage=1 00:05:23.550 --rc genhtml_legend=1 00:05:23.550 --rc geninfo_all_blocks=1 00:05:23.550 --rc geninfo_unexecuted_blocks=1 00:05:23.550 00:05:23.550 ' 00:05:23.550 22:25:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:23.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.550 --rc genhtml_branch_coverage=1 00:05:23.550 --rc genhtml_function_coverage=1 00:05:23.550 --rc genhtml_legend=1 00:05:23.550 --rc geninfo_all_blocks=1 00:05:23.550 --rc geninfo_unexecuted_blocks=1 00:05:23.550 00:05:23.550 ' 00:05:23.550 22:25:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:23.550 22:25:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68254 00:05:23.550 22:25:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68254 00:05:23.550 22:25:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.550 22:25:24 -- common/autotest_common.sh@829 -- # '[' -z 68254 ']' 00:05:23.550 22:25:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.550 22:25:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.550 22:25:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
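The memory inspection that follows is two steps: an RPC that asks the target to write a raw DPDK memory dump, and a post-processing script that summarizes it. Condensed from the trace below (the socket defaults to /var/tmp/spdk.sock, which is where this spdk_tgt instance listens):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem_info=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # the RPC replies with the location of the dump, /tmp/spdk_mem_dump.txt below
    "$rpc" env_dpdk_get_mem_stats

    "$mem_info"        # heap / mempool / memzone totals
    "$mem_info" -m 0   # detailed element listing for heap id 0, as printed below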
00:05:23.550 22:25:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.550 22:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.808 [2024-11-20 22:25:24.306938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:23.809 [2024-11-20 22:25:24.307047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68254 ] 00:05:23.809 [2024-11-20 22:25:24.435943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.809 [2024-11-20 22:25:24.493651] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.809 [2024-11-20 22:25:24.493787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.745 22:25:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.745 22:25:25 -- common/autotest_common.sh@862 -- # return 0 00:05:24.745 22:25:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.745 22:25:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.745 22:25:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.745 22:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.745 { 00:05:24.745 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.745 } 00:05:24.745 22:25:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.745 22:25:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.745 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:24.745 1 heaps totaling size 814.000000 MiB 00:05:24.745 size: 814.000000 MiB heap id: 0 00:05:24.745 end heaps---------- 00:05:24.745 8 mempools totaling size 598.116089 MiB 00:05:24.745 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:24.745 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:24.745 size: 84.521057 MiB name: bdev_io_68254 00:05:24.745 size: 51.011292 MiB name: evtpool_68254 00:05:24.745 size: 50.003479 MiB name: msgpool_68254 00:05:24.745 size: 21.763794 MiB name: PDU_Pool 00:05:24.745 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:24.745 size: 0.026123 MiB name: Session_Pool 00:05:24.745 end mempools------- 00:05:24.745 6 memzones totaling size 4.142822 MiB 00:05:24.745 size: 1.000366 MiB name: RG_ring_0_68254 00:05:24.745 size: 1.000366 MiB name: RG_ring_1_68254 00:05:24.745 size: 1.000366 MiB name: RG_ring_4_68254 00:05:24.745 size: 1.000366 MiB name: RG_ring_5_68254 00:05:24.745 size: 0.125366 MiB name: RG_ring_2_68254 00:05:24.745 size: 0.015991 MiB name: RG_ring_3_68254 00:05:24.745 end memzones------- 00:05:24.745 22:25:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:24.745 heap id: 0 total size: 814.000000 MiB number of busy elements: 224 number of free elements: 15 00:05:24.745 list of free elements. 
size: 12.485840 MiB 00:05:24.745 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:24.745 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:24.745 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:24.745 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:24.745 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:24.745 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:24.745 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:24.745 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:24.745 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:24.745 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:05:24.745 element at address: 0x20000b200000 with size: 0.489258 MiB 00:05:24.745 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:24.745 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:24.745 element at address: 0x200027e00000 with size: 0.398315 MiB 00:05:24.745 element at address: 0x200003a00000 with size: 0.351501 MiB 00:05:24.745 list of standard malloc elements. size: 199.251587 MiB 00:05:24.745 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:24.745 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:24.745 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:24.745 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:24.745 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:24.745 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:24.745 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:24.745 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:24.745 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:24.745 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:05:24.745 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:24.745 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:24.745 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:24.745 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:24.745 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:24.745 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:24.745 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:24.745 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:24.745 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:24.745 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:24.745 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:24.746 element at 
address: 0x20000b27d700 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94300 
with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:24.746 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6dec0 with size: 0.000183 MiB 
00:05:24.746 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:24.746 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:24.746 list of memzone associated elements. 
size: 602.262573 MiB 00:05:24.746 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:24.746 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:24.746 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:24.746 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:24.746 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:24.746 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68254_0 00:05:24.746 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:24.746 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68254_0 00:05:24.746 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:24.746 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68254_0 00:05:24.746 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:24.746 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:24.746 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:24.746 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:24.746 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:24.746 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68254 00:05:24.746 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:24.746 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68254 00:05:24.746 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:24.746 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68254 00:05:24.746 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:24.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:24.746 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:24.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:24.746 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:24.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:24.746 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:24.746 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:24.746 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:24.746 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68254 00:05:24.746 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:24.746 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68254 00:05:24.746 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:24.746 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68254 00:05:24.746 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:24.746 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68254 00:05:24.746 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:24.746 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68254 00:05:24.746 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:24.746 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:24.746 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:24.746 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:24.746 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:24.746 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:24.746 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:24.746 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68254 00:05:24.746 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:24.746 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:24.746 element at address: 0x200027e66100 with size: 0.023743 MiB 00:05:24.746 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:24.746 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:24.746 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68254 00:05:24.746 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:05:24.746 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:24.746 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:24.746 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68254 00:05:24.746 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:24.746 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68254 00:05:24.746 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:05:24.746 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:24.746 22:25:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:24.746 22:25:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68254 00:05:24.746 22:25:25 -- common/autotest_common.sh@936 -- # '[' -z 68254 ']' 00:05:24.746 22:25:25 -- common/autotest_common.sh@940 -- # kill -0 68254 00:05:24.746 22:25:25 -- common/autotest_common.sh@941 -- # uname 00:05:24.746 22:25:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.746 22:25:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68254 00:05:24.746 22:25:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.746 killing process with pid 68254 00:05:24.746 22:25:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.746 22:25:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68254' 00:05:24.746 22:25:25 -- common/autotest_common.sh@955 -- # kill 68254 00:05:24.746 22:25:25 -- common/autotest_common.sh@960 -- # wait 68254 00:05:25.314 00:05:25.314 real 0m1.681s 00:05:25.314 user 0m1.767s 00:05:25.314 sys 0m0.449s 00:05:25.314 22:25:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.314 ************************************ 00:05:25.314 END TEST dpdk_mem_utility 00:05:25.314 ************************************ 00:05:25.314 22:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.314 22:25:25 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.314 22:25:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.314 22:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.314 22:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.314 ************************************ 00:05:25.314 START TEST event 00:05:25.314 ************************************ 00:05:25.314 22:25:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.314 * Looking for test storage... 
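Each test file opens with the same coverage preamble, traced once more below for the event suite. In sketch form (lt is the dotted-version comparator sourced from scripts/common.sh and is assumed to be in scope):

    ver=$(lcov --version | awk '{print $NF}')   # 1.15 on this runner
    if lt "$ver" 2; then
        # pre-2.0 lcov enables branch/function coverage through --rc switches
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    export LCOV_OPTS=" $lcov_rc_opt
     --rc genhtml_branch_coverage=1
     --rc genhtml_function_coverage=1
     --rc genhtml_legend=1
     --rc geninfo_all_blocks=1
     --rc geninfo_unexecuted_blocks=1"
    export LCOV="lcov $LCOV_OPTS"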
00:05:25.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.314 22:25:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.314 22:25:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.314 22:25:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.314 22:25:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.314 22:25:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.314 22:25:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.314 22:25:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.314 22:25:25 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.314 22:25:25 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.314 22:25:25 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.314 22:25:25 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.314 22:25:25 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.314 22:25:25 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.314 22:25:25 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.314 22:25:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.314 22:25:25 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.314 22:25:25 -- scripts/common.sh@344 -- # : 1 00:05:25.314 22:25:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.314 22:25:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.314 22:25:25 -- scripts/common.sh@364 -- # decimal 1 00:05:25.314 22:25:25 -- scripts/common.sh@352 -- # local d=1 00:05:25.314 22:25:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.314 22:25:25 -- scripts/common.sh@354 -- # echo 1 00:05:25.314 22:25:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.314 22:25:25 -- scripts/common.sh@365 -- # decimal 2 00:05:25.314 22:25:25 -- scripts/common.sh@352 -- # local d=2 00:05:25.314 22:25:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.314 22:25:25 -- scripts/common.sh@354 -- # echo 2 00:05:25.314 22:25:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.314 22:25:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.314 22:25:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.314 22:25:25 -- scripts/common.sh@367 -- # return 0 00:05:25.314 22:25:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.314 22:25:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.314 --rc genhtml_branch_coverage=1 00:05:25.314 --rc genhtml_function_coverage=1 00:05:25.314 --rc genhtml_legend=1 00:05:25.314 --rc geninfo_all_blocks=1 00:05:25.314 --rc geninfo_unexecuted_blocks=1 00:05:25.314 00:05:25.314 ' 00:05:25.314 22:25:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.314 --rc genhtml_branch_coverage=1 00:05:25.314 --rc genhtml_function_coverage=1 00:05:25.314 --rc genhtml_legend=1 00:05:25.314 --rc geninfo_all_blocks=1 00:05:25.314 --rc geninfo_unexecuted_blocks=1 00:05:25.314 00:05:25.314 ' 00:05:25.314 22:25:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.314 --rc genhtml_branch_coverage=1 00:05:25.314 --rc genhtml_function_coverage=1 00:05:25.314 --rc genhtml_legend=1 00:05:25.314 --rc geninfo_all_blocks=1 00:05:25.314 --rc geninfo_unexecuted_blocks=1 00:05:25.314 00:05:25.314 ' 00:05:25.314 22:25:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.314 --rc genhtml_branch_coverage=1 00:05:25.314 --rc genhtml_function_coverage=1 00:05:25.314 --rc genhtml_legend=1 00:05:25.314 --rc geninfo_all_blocks=1 00:05:25.314 --rc geninfo_unexecuted_blocks=1 00:05:25.314 00:05:25.314 ' 00:05:25.314 22:25:25 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:25.314 22:25:25 -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.314 22:25:25 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.314 22:25:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:25.314 22:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.314 22:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.314 ************************************ 00:05:25.314 START TEST event_perf 00:05:25.314 ************************************ 00:05:25.314 22:25:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.314 Running I/O for 1 seconds...[2024-11-20 22:25:26.003041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:25.314 [2024-11-20 22:25:26.003829] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68345 ] 00:05:25.573 [2024-11-20 22:25:26.139942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.573 [2024-11-20 22:25:26.195313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.573 [2024-11-20 22:25:26.195444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.573 [2024-11-20 22:25:26.195575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.573 [2024-11-20 22:25:26.195583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.951 Running I/O for 1 seconds... 00:05:26.951 lcore 0: 128491 00:05:26.951 lcore 1: 128490 00:05:26.951 lcore 2: 128489 00:05:26.951 lcore 3: 128489 00:05:26.951 done. 00:05:26.951 ************************************ 00:05:26.951 END TEST event_perf 00:05:26.951 ************************************ 00:05:26.951 00:05:26.951 real 0m1.271s 00:05:26.951 user 0m4.079s 00:05:26.951 sys 0m0.056s 00:05:26.951 22:25:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.951 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.951 22:25:27 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.951 22:25:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:26.951 22:25:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.951 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.951 ************************************ 00:05:26.951 START TEST event_reactor 00:05:26.951 ************************************ 00:05:26.951 22:25:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.951 [2024-11-20 22:25:27.329229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:26.951 [2024-11-20 22:25:27.329333] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68389 ] 00:05:26.951 [2024-11-20 22:25:27.462717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.951 [2024-11-20 22:25:27.516816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.887 test_start 00:05:27.887 oneshot 00:05:27.887 tick 100 00:05:27.887 tick 100 00:05:27.887 tick 250 00:05:27.887 tick 100 00:05:27.887 tick 100 00:05:27.887 tick 100 00:05:27.887 tick 250 00:05:27.887 tick 500 00:05:27.887 tick 100 00:05:27.887 tick 100 00:05:27.887 tick 250 00:05:27.887 tick 100 00:05:27.887 tick 100 00:05:27.887 test_end 00:05:27.887 ************************************ 00:05:27.887 END TEST event_reactor 00:05:27.887 00:05:27.887 real 0m1.254s 00:05:27.887 user 0m1.087s 00:05:27.887 sys 0m0.062s 00:05:27.887 22:25:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.887 22:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:27.887 ************************************ 00:05:27.887 22:25:28 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:27.887 22:25:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:27.887 22:25:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.887 22:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:28.146 ************************************ 00:05:28.146 START TEST event_reactor_perf 00:05:28.146 ************************************ 00:05:28.146 22:25:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.146 [2024-11-20 22:25:28.634605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:28.146 [2024-11-20 22:25:28.634891] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68419 ] 00:05:28.146 [2024-11-20 22:25:28.770163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.146 [2024-11-20 22:25:28.826778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.523 test_start 00:05:29.523 test_end 00:05:29.523 Performance: 470080 events per second 00:05:29.523 00:05:29.523 real 0m1.271s 00:05:29.523 user 0m1.110s 00:05:29.523 sys 0m0.056s 00:05:29.523 ************************************ 00:05:29.523 END TEST event_reactor_perf 00:05:29.523 22:25:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.523 22:25:29 -- common/autotest_common.sh@10 -- # set +x 00:05:29.523 ************************************ 00:05:29.523 22:25:29 -- event/event.sh@49 -- # uname -s 00:05:29.523 22:25:29 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.523 22:25:29 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.523 22:25:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.523 22:25:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.523 22:25:29 -- common/autotest_common.sh@10 -- # set +x 00:05:29.523 ************************************ 00:05:29.523 START TEST event_scheduler 00:05:29.523 ************************************ 00:05:29.523 22:25:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.523 * Looking for test storage... 00:05:29.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:29.523 22:25:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:29.523 22:25:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:29.523 22:25:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:29.523 22:25:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:29.523 22:25:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:29.523 22:25:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:29.523 22:25:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:29.523 22:25:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:29.523 22:25:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:29.523 22:25:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.523 22:25:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:29.523 22:25:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:29.523 22:25:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:29.523 22:25:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:29.523 22:25:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:29.523 22:25:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:29.523 22:25:30 -- scripts/common.sh@344 -- # : 1 00:05:29.523 22:25:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:29.523 22:25:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.523 22:25:30 -- scripts/common.sh@364 -- # decimal 1 00:05:29.523 22:25:30 -- scripts/common.sh@352 -- # local d=1 00:05:29.523 22:25:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.523 22:25:30 -- scripts/common.sh@354 -- # echo 1 00:05:29.523 22:25:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:29.523 22:25:30 -- scripts/common.sh@365 -- # decimal 2 00:05:29.523 22:25:30 -- scripts/common.sh@352 -- # local d=2 00:05:29.523 22:25:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.523 22:25:30 -- scripts/common.sh@354 -- # echo 2 00:05:29.523 22:25:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:29.523 22:25:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:29.523 22:25:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:29.523 22:25:30 -- scripts/common.sh@367 -- # return 0 00:05:29.523 22:25:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.523 22:25:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:29.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.523 --rc genhtml_branch_coverage=1 00:05:29.523 --rc genhtml_function_coverage=1 00:05:29.523 --rc genhtml_legend=1 00:05:29.523 --rc geninfo_all_blocks=1 00:05:29.523 --rc geninfo_unexecuted_blocks=1 00:05:29.523 00:05:29.523 ' 00:05:29.523 22:25:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:29.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.523 --rc genhtml_branch_coverage=1 00:05:29.523 --rc genhtml_function_coverage=1 00:05:29.523 --rc genhtml_legend=1 00:05:29.523 --rc geninfo_all_blocks=1 00:05:29.523 --rc geninfo_unexecuted_blocks=1 00:05:29.523 00:05:29.523 ' 00:05:29.523 22:25:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:29.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.523 --rc genhtml_branch_coverage=1 00:05:29.523 --rc genhtml_function_coverage=1 00:05:29.523 --rc genhtml_legend=1 00:05:29.523 --rc geninfo_all_blocks=1 00:05:29.523 --rc geninfo_unexecuted_blocks=1 00:05:29.523 00:05:29.523 ' 00:05:29.523 22:25:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:29.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.523 --rc genhtml_branch_coverage=1 00:05:29.523 --rc genhtml_function_coverage=1 00:05:29.523 --rc genhtml_legend=1 00:05:29.523 --rc geninfo_all_blocks=1 00:05:29.523 --rc geninfo_unexecuted_blocks=1 00:05:29.523 00:05:29.523 ' 00:05:29.523 22:25:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:29.523 22:25:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68483 00:05:29.523 22:25:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.523 22:25:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 68483 00:05:29.523 22:25:30 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:29.523 22:25:30 -- common/autotest_common.sh@829 -- # '[' -z 68483 ']' 00:05:29.523 22:25:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.523 22:25:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.523 22:25:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.523 22:25:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.523 22:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:29.523 [2024-11-20 22:25:30.197946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:29.523 [2024-11-20 22:25:30.198046] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68483 ] 00:05:29.782 [2024-11-20 22:25:30.336377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.782 [2024-11-20 22:25:30.410335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.782 [2024-11-20 22:25:30.410391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.782 [2024-11-20 22:25:30.410530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.782 [2024-11-20 22:25:30.412634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.719 22:25:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.719 22:25:31 -- common/autotest_common.sh@862 -- # return 0 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 POWER: Env isn't set yet! 00:05:30.719 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:30.719 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.719 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.719 POWER: Attempting to initialise PSTAT power management... 00:05:30.719 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.719 POWER: Cannot set governor of lcore 0 to performance 00:05:30.719 POWER: Attempting to initialise CPPC power management... 00:05:30.719 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.719 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.719 POWER: Attempting to initialise VM power management... 
00:05:30.719 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:30.719 POWER: Unable to set Power Management Environment for lcore 0 00:05:30.719 [2024-11-20 22:25:31.213962] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:30.719 [2024-11-20 22:25:31.213975] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:30.719 [2024-11-20 22:25:31.213982] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.719 [2024-11-20 22:25:31.213994] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.719 [2024-11-20 22:25:31.214001] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.719 [2024-11-20 22:25:31.214006] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 [2024-11-20 22:25:31.331143] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.719 22:25:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.719 22:25:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 ************************************ 00:05:30.719 START TEST scheduler_create_thread 00:05:30.719 ************************************ 00:05:30.719 22:25:31 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 2 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 3 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 4 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 5 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 6 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 7 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 8 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 9 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 10 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 22:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.719 22:25:31 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:30.719 22:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.719 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:32.622 22:25:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.622 22:25:32 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.622 22:25:32 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.622 22:25:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.622 22:25:32 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 22:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.558 00:05:33.558 real 0m2.614s 00:05:33.558 user 0m0.014s 00:05:33.558 sys 0m0.003s 00:05:33.558 22:25:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.558 ************************************ 00:05:33.558 END TEST scheduler_create_thread 
00:05:33.558 ************************************ 00:05:33.558 22:25:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 22:25:33 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:33.558 22:25:33 -- scheduler/scheduler.sh@46 -- # killprocess 68483 00:05:33.558 22:25:33 -- common/autotest_common.sh@936 -- # '[' -z 68483 ']' 00:05:33.558 22:25:33 -- common/autotest_common.sh@940 -- # kill -0 68483 00:05:33.558 22:25:33 -- common/autotest_common.sh@941 -- # uname 00:05:33.558 22:25:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.558 22:25:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68483 00:05:33.558 22:25:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:33.558 22:25:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:33.558 killing process with pid 68483 00:05:33.558 22:25:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68483' 00:05:33.558 22:25:34 -- common/autotest_common.sh@955 -- # kill 68483 00:05:33.558 22:25:34 -- common/autotest_common.sh@960 -- # wait 68483 00:05:33.817 [2024-11-20 22:25:34.434549] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:34.076 00:05:34.076 real 0m4.676s 00:05:34.076 user 0m8.830s 00:05:34.076 sys 0m0.464s 00:05:34.076 22:25:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.076 22:25:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.076 ************************************ 00:05:34.076 END TEST event_scheduler 00:05:34.076 ************************************ 00:05:34.076 22:25:34 -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.076 22:25:34 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.076 22:25:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.076 22:25:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.076 22:25:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.076 ************************************ 00:05:34.076 START TEST app_repeat 00:05:34.076 ************************************ 00:05:34.076 22:25:34 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:34.076 22:25:34 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.076 22:25:34 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.076 22:25:34 -- event/event.sh@13 -- # local nbd_list 00:05:34.076 22:25:34 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.076 22:25:34 -- event/event.sh@14 -- # local bdev_list 00:05:34.076 22:25:34 -- event/event.sh@15 -- # local repeat_times=4 00:05:34.076 22:25:34 -- event/event.sh@17 -- # modprobe nbd 00:05:34.076 22:25:34 -- event/event.sh@19 -- # repeat_pid=68605 00:05:34.076 22:25:34 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.076 22:25:34 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.076 Process app_repeat pid: 68605 00:05:34.076 22:25:34 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68605' 00:05:34.076 22:25:34 -- event/event.sh@23 -- # for i in {0..2} 00:05:34.076 spdk_app_start Round 0 00:05:34.076 22:25:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.076 22:25:34 -- event/event.sh@25 -- # waitforlisten 68605 /var/tmp/spdk-nbd.sock 00:05:34.076 22:25:34 -- common/autotest_common.sh@829 -- # '[' -z 68605 ']' 00:05:34.076 22:25:34 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.076 22:25:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.076 22:25:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.076 22:25:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.076 22:25:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.076 [2024-11-20 22:25:34.721647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:34.076 [2024-11-20 22:25:34.721735] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68605 ] 00:05:34.335 [2024-11-20 22:25:34.850545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.335 [2024-11-20 22:25:34.916174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.335 [2024-11-20 22:25:34.916199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.031 22:25:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.031 22:25:35 -- common/autotest_common.sh@862 -- # return 0 00:05:35.031 22:25:35 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.348 Malloc0 00:05:35.348 22:25:35 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.619 Malloc1 00:05:35.619 22:25:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@12 -- # local i 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.619 22:25:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.879 /dev/nbd0 00:05:35.879 22:25:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.879 22:25:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.879 22:25:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:35.879 22:25:36 -- common/autotest_common.sh@867 -- # local i 00:05:35.879 22:25:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.879 22:25:36 -- common/autotest_common.sh@869 
-- # (( i <= 20 )) 00:05:35.879 22:25:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:35.879 22:25:36 -- common/autotest_common.sh@871 -- # break 00:05:35.879 22:25:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.879 22:25:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.879 22:25:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.879 1+0 records in 00:05:35.879 1+0 records out 00:05:35.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225944 s, 18.1 MB/s 00:05:35.879 22:25:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.879 22:25:36 -- common/autotest_common.sh@884 -- # size=4096 00:05:35.879 22:25:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.879 22:25:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.879 22:25:36 -- common/autotest_common.sh@887 -- # return 0 00:05:35.879 22:25:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.879 22:25:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.879 22:25:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.138 /dev/nbd1 00:05:36.138 22:25:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.138 22:25:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.138 22:25:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.138 22:25:36 -- common/autotest_common.sh@867 -- # local i 00:05:36.138 22:25:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.138 22:25:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.138 22:25:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.138 22:25:36 -- common/autotest_common.sh@871 -- # break 00:05:36.138 22:25:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.138 22:25:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.138 22:25:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.138 1+0 records in 00:05:36.138 1+0 records out 00:05:36.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327553 s, 12.5 MB/s 00:05:36.138 22:25:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.138 22:25:36 -- common/autotest_common.sh@884 -- # size=4096 00:05:36.138 22:25:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.138 22:25:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.138 22:25:36 -- common/autotest_common.sh@887 -- # return 0 00:05:36.138 22:25:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.138 22:25:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.138 22:25:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.138 22:25:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.138 22:25:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.397 22:25:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.397 { 00:05:36.397 "bdev_name": "Malloc0", 00:05:36.397 "nbd_device": "/dev/nbd0" 00:05:36.397 }, 00:05:36.397 { 00:05:36.397 "bdev_name": "Malloc1", 00:05:36.397 "nbd_device": 
"/dev/nbd1" 00:05:36.397 } 00:05:36.397 ]' 00:05:36.397 22:25:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.397 { 00:05:36.397 "bdev_name": "Malloc0", 00:05:36.397 "nbd_device": "/dev/nbd0" 00:05:36.397 }, 00:05:36.397 { 00:05:36.397 "bdev_name": "Malloc1", 00:05:36.397 "nbd_device": "/dev/nbd1" 00:05:36.397 } 00:05:36.397 ]' 00:05:36.397 22:25:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.397 22:25:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.397 /dev/nbd1' 00:05:36.397 22:25:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.397 /dev/nbd1' 00:05:36.397 22:25:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.397 22:25:37 -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.397 22:25:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.398 256+0 records in 00:05:36.398 256+0 records out 00:05:36.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00808386 s, 130 MB/s 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.398 22:25:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.656 256+0 records in 00:05:36.656 256+0 records out 00:05:36.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238509 s, 44.0 MB/s 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.657 256+0 records in 00:05:36.657 256+0 records out 00:05:36.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248539 s, 42.2 MB/s 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@51 -- # local i 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.657 22:25:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@41 -- # break 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.916 22:25:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@41 -- # break 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.175 22:25:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@65 -- # true 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.434 22:25:38 -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.434 22:25:38 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.692 22:25:38 -- event/event.sh@35 -- # sleep 3 00:05:37.951 [2024-11-20 22:25:38.647558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.210 [2024-11-20 22:25:38.698765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.210 [2024-11-20 
22:25:38.698785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.210 [2024-11-20 22:25:38.769207] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.210 [2024-11-20 22:25:38.769295] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.745 22:25:41 -- event/event.sh@23 -- # for i in {0..2} 00:05:40.745 spdk_app_start Round 1 00:05:40.745 22:25:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.745 22:25:41 -- event/event.sh@25 -- # waitforlisten 68605 /var/tmp/spdk-nbd.sock 00:05:40.745 22:25:41 -- common/autotest_common.sh@829 -- # '[' -z 68605 ']' 00:05:40.745 22:25:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.745 22:25:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.745 22:25:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.745 22:25:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.745 22:25:41 -- common/autotest_common.sh@10 -- # set +x 00:05:41.004 22:25:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.004 22:25:41 -- common/autotest_common.sh@862 -- # return 0 00:05:41.004 22:25:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.261 Malloc0 00:05:41.261 22:25:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.519 Malloc1 00:05:41.519 22:25:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@12 -- # local i 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.519 22:25:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.778 /dev/nbd0 00:05:41.778 22:25:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.778 22:25:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.778 22:25:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:41.778 22:25:42 -- common/autotest_common.sh@867 -- # local i 00:05:41.778 22:25:42 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:05:41.778 22:25:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.778 22:25:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:41.778 22:25:42 -- common/autotest_common.sh@871 -- # break 00:05:41.778 22:25:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.778 22:25:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.778 22:25:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.778 1+0 records in 00:05:41.778 1+0 records out 00:05:41.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285825 s, 14.3 MB/s 00:05:41.778 22:25:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.778 22:25:42 -- common/autotest_common.sh@884 -- # size=4096 00:05:41.778 22:25:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.778 22:25:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.778 22:25:42 -- common/autotest_common.sh@887 -- # return 0 00:05:41.778 22:25:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.778 22:25:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.778 22:25:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.037 /dev/nbd1 00:05:42.037 22:25:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.037 22:25:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.037 22:25:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:42.037 22:25:42 -- common/autotest_common.sh@867 -- # local i 00:05:42.037 22:25:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.037 22:25:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.037 22:25:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:42.037 22:25:42 -- common/autotest_common.sh@871 -- # break 00:05:42.037 22:25:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.037 22:25:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.037 22:25:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.037 1+0 records in 00:05:42.037 1+0 records out 00:05:42.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322417 s, 12.7 MB/s 00:05:42.037 22:25:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.037 22:25:42 -- common/autotest_common.sh@884 -- # size=4096 00:05:42.037 22:25:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.037 22:25:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.037 22:25:42 -- common/autotest_common.sh@887 -- # return 0 00:05:42.037 22:25:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.037 22:25:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.037 22:25:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.037 22:25:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.037 22:25:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.296 { 00:05:42.296 "bdev_name": "Malloc0", 00:05:42.296 "nbd_device": "/dev/nbd0" 00:05:42.296 }, 00:05:42.296 { 00:05:42.296 
"bdev_name": "Malloc1", 00:05:42.296 "nbd_device": "/dev/nbd1" 00:05:42.296 } 00:05:42.296 ]' 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.296 { 00:05:42.296 "bdev_name": "Malloc0", 00:05:42.296 "nbd_device": "/dev/nbd0" 00:05:42.296 }, 00:05:42.296 { 00:05:42.296 "bdev_name": "Malloc1", 00:05:42.296 "nbd_device": "/dev/nbd1" 00:05:42.296 } 00:05:42.296 ]' 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.296 /dev/nbd1' 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.296 /dev/nbd1' 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.296 256+0 records in 00:05:42.296 256+0 records out 00:05:42.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665256 s, 158 MB/s 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.296 22:25:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.296 256+0 records in 00:05:42.296 256+0 records out 00:05:42.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223666 s, 46.9 MB/s 00:05:42.296 22:25:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.296 22:25:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.555 256+0 records in 00:05:42.555 256+0 records out 00:05:42.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238077 s, 44.0 MB/s 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.555 22:25:43 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@51 -- # local i 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@41 -- # break 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.555 22:25:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@41 -- # break 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.122 22:25:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@65 -- # true 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.380 22:25:43 -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.380 22:25:43 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.639 22:25:44 -- event/event.sh@35 -- # sleep 3 00:05:43.897 [2024-11-20 22:25:44.493703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.897 [2024-11-20 22:25:44.544923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:05:43.897 [2024-11-20 22:25:44.544946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.897 [2024-11-20 22:25:44.615330] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.897 [2024-11-20 22:25:44.615394] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.183 22:25:47 -- event/event.sh@23 -- # for i in {0..2} 00:05:47.183 spdk_app_start Round 2 00:05:47.183 22:25:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:47.183 22:25:47 -- event/event.sh@25 -- # waitforlisten 68605 /var/tmp/spdk-nbd.sock 00:05:47.183 22:25:47 -- common/autotest_common.sh@829 -- # '[' -z 68605 ']' 00:05:47.183 22:25:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.183 22:25:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.183 22:25:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.183 22:25:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.183 22:25:47 -- common/autotest_common.sh@10 -- # set +x 00:05:47.183 22:25:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.183 22:25:47 -- common/autotest_common.sh@862 -- # return 0 00:05:47.183 22:25:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.183 Malloc0 00:05:47.183 22:25:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.442 Malloc1 00:05:47.442 22:25:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@12 -- # local i 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.442 22:25:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.442 /dev/nbd0 00:05:47.701 22:25:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.701 22:25:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.701 22:25:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:47.701 22:25:48 -- common/autotest_common.sh@867 -- # local i 00:05:47.701 22:25:48 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.701 22:25:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.701 22:25:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:47.701 22:25:48 -- common/autotest_common.sh@871 -- # break 00:05:47.701 22:25:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.701 22:25:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.701 22:25:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.701 1+0 records in 00:05:47.701 1+0 records out 00:05:47.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290844 s, 14.1 MB/s 00:05:47.701 22:25:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.701 22:25:48 -- common/autotest_common.sh@884 -- # size=4096 00:05:47.701 22:25:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.701 22:25:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.701 22:25:48 -- common/autotest_common.sh@887 -- # return 0 00:05:47.701 22:25:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.701 22:25:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.701 22:25:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.960 /dev/nbd1 00:05:47.960 22:25:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.960 22:25:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.960 22:25:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:47.960 22:25:48 -- common/autotest_common.sh@867 -- # local i 00:05:47.960 22:25:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.960 22:25:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.961 22:25:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:47.961 22:25:48 -- common/autotest_common.sh@871 -- # break 00:05:47.961 22:25:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.961 22:25:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.961 22:25:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.961 1+0 records in 00:05:47.961 1+0 records out 00:05:47.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230741 s, 17.8 MB/s 00:05:47.961 22:25:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.961 22:25:48 -- common/autotest_common.sh@884 -- # size=4096 00:05:47.961 22:25:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.961 22:25:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.961 22:25:48 -- common/autotest_common.sh@887 -- # return 0 00:05:47.961 22:25:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.961 22:25:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.961 22:25:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.961 22:25:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.961 22:25:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.225 { 00:05:48.225 "bdev_name": "Malloc0", 00:05:48.225 "nbd_device": "/dev/nbd0" 
00:05:48.225 }, 00:05:48.225 { 00:05:48.225 "bdev_name": "Malloc1", 00:05:48.225 "nbd_device": "/dev/nbd1" 00:05:48.225 } 00:05:48.225 ]' 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.225 { 00:05:48.225 "bdev_name": "Malloc0", 00:05:48.225 "nbd_device": "/dev/nbd0" 00:05:48.225 }, 00:05:48.225 { 00:05:48.225 "bdev_name": "Malloc1", 00:05:48.225 "nbd_device": "/dev/nbd1" 00:05:48.225 } 00:05:48.225 ]' 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.225 /dev/nbd1' 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.225 /dev/nbd1' 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.225 256+0 records in 00:05:48.225 256+0 records out 00:05:48.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105843 s, 99.1 MB/s 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.225 256+0 records in 00:05:48.225 256+0 records out 00:05:48.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247622 s, 42.3 MB/s 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.225 256+0 records in 00:05:48.225 256+0 records out 00:05:48.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250231 s, 41.9 MB/s 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.225 22:25:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@51 -- # local i 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.226 22:25:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@41 -- # break 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.487 22:25:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@41 -- # break 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.745 22:25:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@65 -- # true 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.004 22:25:49 -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.004 22:25:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.263 22:25:49 -- event/event.sh@35 -- # sleep 3 00:05:49.522 [2024-11-20 22:25:50.192682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.522 [2024-11-20 22:25:50.243792] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:49.522 [2024-11-20 22:25:50.243817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.780 [2024-11-20 22:25:50.314445] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.780 [2024-11-20 22:25:50.314524] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.316 22:25:52 -- event/event.sh@38 -- # waitforlisten 68605 /var/tmp/spdk-nbd.sock 00:05:52.316 22:25:52 -- common/autotest_common.sh@829 -- # '[' -z 68605 ']' 00:05:52.316 22:25:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.316 22:25:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.316 22:25:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.316 22:25:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.316 22:25:52 -- common/autotest_common.sh@10 -- # set +x 00:05:52.575 22:25:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.575 22:25:53 -- common/autotest_common.sh@862 -- # return 0 00:05:52.575 22:25:53 -- event/event.sh@39 -- # killprocess 68605 00:05:52.575 22:25:53 -- common/autotest_common.sh@936 -- # '[' -z 68605 ']' 00:05:52.575 22:25:53 -- common/autotest_common.sh@940 -- # kill -0 68605 00:05:52.575 22:25:53 -- common/autotest_common.sh@941 -- # uname 00:05:52.575 22:25:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.575 22:25:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68605 00:05:52.575 22:25:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.575 22:25:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.575 killing process with pid 68605 00:05:52.575 22:25:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68605' 00:05:52.575 22:25:53 -- common/autotest_common.sh@955 -- # kill 68605 00:05:52.575 22:25:53 -- common/autotest_common.sh@960 -- # wait 68605 00:05:52.834 spdk_app_start is called in Round 0. 00:05:52.834 Shutdown signal received, stop current app iteration 00:05:52.834 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:52.834 spdk_app_start is called in Round 1. 00:05:52.834 Shutdown signal received, stop current app iteration 00:05:52.834 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:52.834 spdk_app_start is called in Round 2. 00:05:52.834 Shutdown signal received, stop current app iteration 00:05:52.834 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:52.834 spdk_app_start is called in Round 3. 
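The Round messages above summarize the app_repeat loop in event/event.sh: each round waits for the target to come back up on /var/tmp/spdk-nbd.sock, creates two malloc bdevs, verifies them through /dev/nbd0 and /dev/nbd1, and then asks the app to terminate with SIGTERM before the next iteration. A condensed sketch of that driver, using the commands that appear verbatim in the trace (the surrounding restart bookkeeping is an approximation):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock
      $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
      $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
      # export both bdevs over nbd, write random data and compare it back
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3
  done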
00:05:52.834 Shutdown signal received, stop current app iteration 00:05:52.834 22:25:53 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:52.834 22:25:53 -- event/event.sh@42 -- # return 0 00:05:52.834 00:05:52.834 real 0m18.784s 00:05:52.834 user 0m42.169s 00:05:52.834 sys 0m2.865s 00:05:52.834 22:25:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.834 ************************************ 00:05:52.834 END TEST app_repeat 00:05:52.834 ************************************ 00:05:52.834 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:52.834 22:25:53 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:52.834 22:25:53 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:52.834 22:25:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.834 22:25:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.834 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:52.834 ************************************ 00:05:52.834 START TEST cpu_locks 00:05:52.834 ************************************ 00:05:52.834 22:25:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.094 * Looking for test storage... 00:05:53.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.094 22:25:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:53.094 22:25:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:53.094 22:25:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:53.094 22:25:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:53.094 22:25:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:53.094 22:25:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:53.094 22:25:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:53.094 22:25:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:53.094 22:25:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:53.094 22:25:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.094 22:25:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:53.094 22:25:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:53.094 22:25:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:53.094 22:25:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:53.094 22:25:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:53.094 22:25:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:53.094 22:25:53 -- scripts/common.sh@344 -- # : 1 00:05:53.094 22:25:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:53.094 22:25:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.094 22:25:53 -- scripts/common.sh@364 -- # decimal 1 00:05:53.094 22:25:53 -- scripts/common.sh@352 -- # local d=1 00:05:53.094 22:25:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.094 22:25:53 -- scripts/common.sh@354 -- # echo 1 00:05:53.094 22:25:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:53.094 22:25:53 -- scripts/common.sh@365 -- # decimal 2 00:05:53.094 22:25:53 -- scripts/common.sh@352 -- # local d=2 00:05:53.094 22:25:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.094 22:25:53 -- scripts/common.sh@354 -- # echo 2 00:05:53.094 22:25:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:53.094 22:25:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:53.094 22:25:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:53.094 22:25:53 -- scripts/common.sh@367 -- # return 0 00:05:53.094 22:25:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.094 22:25:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:53.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.094 --rc genhtml_branch_coverage=1 00:05:53.094 --rc genhtml_function_coverage=1 00:05:53.094 --rc genhtml_legend=1 00:05:53.094 --rc geninfo_all_blocks=1 00:05:53.094 --rc geninfo_unexecuted_blocks=1 00:05:53.094 00:05:53.094 ' 00:05:53.094 22:25:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:53.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.094 --rc genhtml_branch_coverage=1 00:05:53.094 --rc genhtml_function_coverage=1 00:05:53.094 --rc genhtml_legend=1 00:05:53.094 --rc geninfo_all_blocks=1 00:05:53.094 --rc geninfo_unexecuted_blocks=1 00:05:53.094 00:05:53.094 ' 00:05:53.094 22:25:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:53.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.094 --rc genhtml_branch_coverage=1 00:05:53.094 --rc genhtml_function_coverage=1 00:05:53.094 --rc genhtml_legend=1 00:05:53.094 --rc geninfo_all_blocks=1 00:05:53.094 --rc geninfo_unexecuted_blocks=1 00:05:53.094 00:05:53.094 ' 00:05:53.094 22:25:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:53.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.094 --rc genhtml_branch_coverage=1 00:05:53.094 --rc genhtml_function_coverage=1 00:05:53.094 --rc genhtml_legend=1 00:05:53.094 --rc geninfo_all_blocks=1 00:05:53.094 --rc geninfo_unexecuted_blocks=1 00:05:53.094 00:05:53.094 ' 00:05:53.094 22:25:53 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.094 22:25:53 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.094 22:25:53 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.094 22:25:53 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.094 22:25:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.094 22:25:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.094 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:53.094 ************************************ 00:05:53.094 START TEST default_locks 00:05:53.094 ************************************ 00:05:53.094 22:25:53 -- common/autotest_common.sh@1114 -- # default_locks 00:05:53.094 22:25:53 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69237 00:05:53.094 22:25:53 -- event/cpu_locks.sh@47 -- # waitforlisten 69237 00:05:53.094 22:25:53 -- common/autotest_common.sh@829 -- # '[' -z 69237 ']' 00:05:53.094 22:25:53 
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.094 22:25:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.094 22:25:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.094 22:25:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.094 22:25:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.094 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:53.094 [2024-11-20 22:25:53.760835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:53.094 [2024-11-20 22:25:53.761062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69237 ] 00:05:53.353 [2024-11-20 22:25:53.889923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.353 [2024-11-20 22:25:53.955975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.353 [2024-11-20 22:25:53.956148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.290 22:25:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.290 22:25:54 -- common/autotest_common.sh@862 -- # return 0 00:05:54.290 22:25:54 -- event/cpu_locks.sh@49 -- # locks_exist 69237 00:05:54.290 22:25:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.290 22:25:54 -- event/cpu_locks.sh@22 -- # lslocks -p 69237 00:05:54.549 22:25:55 -- event/cpu_locks.sh@50 -- # killprocess 69237 00:05:54.549 22:25:55 -- common/autotest_common.sh@936 -- # '[' -z 69237 ']' 00:05:54.549 22:25:55 -- common/autotest_common.sh@940 -- # kill -0 69237 00:05:54.549 22:25:55 -- common/autotest_common.sh@941 -- # uname 00:05:54.549 22:25:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.549 22:25:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69237 00:05:54.549 killing process with pid 69237 00:05:54.549 22:25:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.549 22:25:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.549 22:25:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69237' 00:05:54.549 22:25:55 -- common/autotest_common.sh@955 -- # kill 69237 00:05:54.549 22:25:55 -- common/autotest_common.sh@960 -- # wait 69237 00:05:55.116 22:25:55 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69237 00:05:55.117 22:25:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.117 22:25:55 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69237 00:05:55.117 22:25:55 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:55.117 22:25:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.117 22:25:55 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:55.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
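The default_locks flow above boils down to two helpers: locks_exist, which asks lslocks whether the target still holds a spdk_cpu_lock file lock, and killprocess, which tears the target down and reaps it. A condensed sketch of both, following the calls shown in the trace (the real killprocess also special-cases targets launched under sudo):

  locks_exist() {
      # spdk_tgt -m 0x1 holds a flock on /var/tmp/spdk_cpu_lock_000 while it runs
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  killprocess() {
      local pid=$1
      kill -0 "$pid"                      # still alive?
      ps --no-headers -o comm= "$pid"     # reactor_0 for an spdk_tgt
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }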
00:05:55.117 ERROR: process (pid: 69237) is no longer running 00:05:55.117 22:25:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.117 22:25:55 -- common/autotest_common.sh@653 -- # waitforlisten 69237 00:05:55.117 22:25:55 -- common/autotest_common.sh@829 -- # '[' -z 69237 ']' 00:05:55.117 22:25:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.117 22:25:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.117 22:25:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.117 22:25:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.117 22:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.117 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69237) - No such process 00:05:55.117 22:25:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.117 22:25:55 -- common/autotest_common.sh@862 -- # return 1 00:05:55.117 22:25:55 -- common/autotest_common.sh@653 -- # es=1 00:05:55.117 22:25:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.117 22:25:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.117 22:25:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.117 22:25:55 -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.117 22:25:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.117 22:25:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.117 22:25:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.117 00:05:55.117 real 0m1.910s 00:05:55.117 user 0m1.968s 00:05:55.117 sys 0m0.604s 00:05:55.117 22:25:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.117 ************************************ 00:05:55.117 END TEST default_locks 00:05:55.117 ************************************ 00:05:55.117 22:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.117 22:25:55 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.117 22:25:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.117 22:25:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.117 22:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.117 ************************************ 00:05:55.117 START TEST default_locks_via_rpc 00:05:55.117 ************************************ 00:05:55.117 22:25:55 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:55.117 22:25:55 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69301 00:05:55.117 22:25:55 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.117 22:25:55 -- event/cpu_locks.sh@63 -- # waitforlisten 69301 00:05:55.117 22:25:55 -- common/autotest_common.sh@829 -- # '[' -z 69301 ']' 00:05:55.117 22:25:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.117 22:25:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.117 22:25:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.117 22:25:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.117 22:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.117 [2024-11-20 22:25:55.736849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:55.117 [2024-11-20 22:25:55.736955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69301 ] 00:05:55.376 [2024-11-20 22:25:55.874041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.376 [2024-11-20 22:25:55.931848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.376 [2024-11-20 22:25:55.932028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.943 22:25:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.943 22:25:56 -- common/autotest_common.sh@862 -- # return 0 00:05:55.943 22:25:56 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:55.943 22:25:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.943 22:25:56 -- common/autotest_common.sh@10 -- # set +x 00:05:55.943 22:25:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.943 22:25:56 -- event/cpu_locks.sh@67 -- # no_locks 00:05:55.943 22:25:56 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.943 22:25:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.943 22:25:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.943 22:25:56 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.943 22:25:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.943 22:25:56 -- common/autotest_common.sh@10 -- # set +x 00:05:55.943 22:25:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.943 22:25:56 -- event/cpu_locks.sh@71 -- # locks_exist 69301 00:05:55.943 22:25:56 -- event/cpu_locks.sh@22 -- # lslocks -p 69301 00:05:55.943 22:25:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.511 22:25:57 -- event/cpu_locks.sh@73 -- # killprocess 69301 00:05:56.511 22:25:57 -- common/autotest_common.sh@936 -- # '[' -z 69301 ']' 00:05:56.511 22:25:57 -- common/autotest_common.sh@940 -- # kill -0 69301 00:05:56.511 22:25:57 -- common/autotest_common.sh@941 -- # uname 00:05:56.511 22:25:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.511 22:25:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69301 00:05:56.511 killing process with pid 69301 00:05:56.511 22:25:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.511 22:25:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.511 22:25:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69301' 00:05:56.511 22:25:57 -- common/autotest_common.sh@955 -- # kill 69301 00:05:56.511 22:25:57 -- common/autotest_common.sh@960 -- # wait 69301 00:05:57.079 00:05:57.079 real 0m1.951s 00:05:57.079 user 0m1.978s 00:05:57.079 sys 0m0.627s 00:05:57.079 22:25:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.079 22:25:57 -- common/autotest_common.sh@10 -- # set +x 00:05:57.079 ************************************ 00:05:57.079 END TEST default_locks_via_rpc 00:05:57.079 ************************************ 00:05:57.079 22:25:57 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:57.079 22:25:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.079 22:25:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.079 22:25:57 -- common/autotest_common.sh@10 -- # set +x 00:05:57.079 
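default_locks_via_rpc exercises the same core lock from the RPC side: the target starts with the lock held, framework_disable_cpumask_locks releases it at runtime, and framework_enable_cpumask_locks claims it again, with lslocks used in between to confirm each state. The equivalent manual calls, assuming the default /var/tmp/spdk.sock socket used in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks
  lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock   # expected to find nothing now
  $rpc framework_enable_cpumask_locks
  lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock   # the core 0 lock is back (-m 0x1)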
************************************ 00:05:57.079 START TEST non_locking_app_on_locked_coremask 00:05:57.079 ************************************ 00:05:57.079 22:25:57 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:57.079 22:25:57 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69370 00:05:57.079 22:25:57 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.079 22:25:57 -- event/cpu_locks.sh@81 -- # waitforlisten 69370 /var/tmp/spdk.sock 00:05:57.079 22:25:57 -- common/autotest_common.sh@829 -- # '[' -z 69370 ']' 00:05:57.079 22:25:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.079 22:25:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.079 22:25:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.079 22:25:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.079 22:25:57 -- common/autotest_common.sh@10 -- # set +x 00:05:57.079 [2024-11-20 22:25:57.741420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:57.079 [2024-11-20 22:25:57.741758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69370 ] 00:05:57.338 [2024-11-20 22:25:57.877987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.338 [2024-11-20 22:25:57.933784] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.338 [2024-11-20 22:25:57.933962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.275 22:25:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.275 22:25:58 -- common/autotest_common.sh@862 -- # return 0 00:05:58.275 22:25:58 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69393 00:05:58.275 22:25:58 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:58.275 22:25:58 -- event/cpu_locks.sh@85 -- # waitforlisten 69393 /var/tmp/spdk2.sock 00:05:58.275 22:25:58 -- common/autotest_common.sh@829 -- # '[' -z 69393 ']' 00:05:58.275 22:25:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.275 22:25:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.275 22:25:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.275 22:25:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.275 22:25:58 -- common/autotest_common.sh@10 -- # set +x 00:05:58.275 [2024-11-20 22:25:58.728364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:58.275 [2024-11-20 22:25:58.728625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69393 ] 00:05:58.275 [2024-11-20 22:25:58.863254] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
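non_locking_app_on_locked_coremask launches a second target on the very same core mask, but with --disable-cpumask-locks and its own RPC socket, which is why the trace above ends with "CPU core locks deactivated" instead of a startup failure. A sketch of the two launches, with the paths and sockets taken from the trace:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"
  $tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, skips the lock
  spdk_tgt_pid2=$!
  waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock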
00:05:58.275 [2024-11-20 22:25:58.863317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.275 [2024-11-20 22:25:58.998729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.275 [2024-11-20 22:25:58.998894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.210 22:25:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.210 22:25:59 -- common/autotest_common.sh@862 -- # return 0 00:05:59.210 22:25:59 -- event/cpu_locks.sh@87 -- # locks_exist 69370 00:05:59.210 22:25:59 -- event/cpu_locks.sh@22 -- # lslocks -p 69370 00:05:59.210 22:25:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.778 22:26:00 -- event/cpu_locks.sh@89 -- # killprocess 69370 00:05:59.778 22:26:00 -- common/autotest_common.sh@936 -- # '[' -z 69370 ']' 00:05:59.778 22:26:00 -- common/autotest_common.sh@940 -- # kill -0 69370 00:05:59.778 22:26:00 -- common/autotest_common.sh@941 -- # uname 00:05:59.778 22:26:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.778 22:26:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69370 00:05:59.778 killing process with pid 69370 00:05:59.778 22:26:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.778 22:26:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.778 22:26:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69370' 00:05:59.778 22:26:00 -- common/autotest_common.sh@955 -- # kill 69370 00:05:59.778 22:26:00 -- common/autotest_common.sh@960 -- # wait 69370 00:06:00.714 22:26:01 -- event/cpu_locks.sh@90 -- # killprocess 69393 00:06:00.714 22:26:01 -- common/autotest_common.sh@936 -- # '[' -z 69393 ']' 00:06:00.714 22:26:01 -- common/autotest_common.sh@940 -- # kill -0 69393 00:06:00.714 22:26:01 -- common/autotest_common.sh@941 -- # uname 00:06:00.714 22:26:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.714 22:26:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69393 00:06:00.714 killing process with pid 69393 00:06:00.714 22:26:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.715 22:26:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.715 22:26:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69393' 00:06:00.715 22:26:01 -- common/autotest_common.sh@955 -- # kill 69393 00:06:00.715 22:26:01 -- common/autotest_common.sh@960 -- # wait 69393 00:06:01.282 ************************************ 00:06:01.282 END TEST non_locking_app_on_locked_coremask 00:06:01.282 ************************************ 00:06:01.282 00:06:01.282 real 0m4.198s 00:06:01.282 user 0m4.455s 00:06:01.282 sys 0m1.207s 00:06:01.282 22:26:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.282 22:26:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.282 22:26:01 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.282 22:26:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.282 22:26:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.282 22:26:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.282 ************************************ 00:06:01.282 START TEST locking_app_on_unlocked_coremask 00:06:01.282 ************************************ 00:06:01.282 22:26:01 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:01.282 22:26:01 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69477 00:06:01.282 22:26:01 -- event/cpu_locks.sh@99 -- # waitforlisten 69477 /var/tmp/spdk.sock 00:06:01.282 22:26:01 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.282 22:26:01 -- common/autotest_common.sh@829 -- # '[' -z 69477 ']' 00:06:01.282 22:26:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.282 22:26:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.282 22:26:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.282 22:26:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.282 22:26:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.282 [2024-11-20 22:26:01.997157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:01.282 [2024-11-20 22:26:01.997472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69477 ] 00:06:01.541 [2024-11-20 22:26:02.134254] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:01.541 [2024-11-20 22:26:02.134642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.541 [2024-11-20 22:26:02.196269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.541 [2024-11-20 22:26:02.196450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.477 22:26:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.477 22:26:02 -- common/autotest_common.sh@862 -- # return 0 00:06:02.477 22:26:02 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69505 00:06:02.477 22:26:02 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.477 22:26:02 -- event/cpu_locks.sh@103 -- # waitforlisten 69505 /var/tmp/spdk2.sock 00:06:02.477 22:26:02 -- common/autotest_common.sh@829 -- # '[' -z 69505 ']' 00:06:02.477 22:26:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.477 22:26:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.477 22:26:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.477 22:26:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.477 22:26:02 -- common/autotest_common.sh@10 -- # set +x 00:06:02.477 [2024-11-20 22:26:03.021859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:02.477 [2024-11-20 22:26:03.021963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69505 ] 00:06:02.477 [2024-11-20 22:26:03.155369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.736 [2024-11-20 22:26:03.284759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.736 [2024-11-20 22:26:03.284933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.303 22:26:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.303 22:26:03 -- common/autotest_common.sh@862 -- # return 0 00:06:03.303 22:26:03 -- event/cpu_locks.sh@105 -- # locks_exist 69505 00:06:03.303 22:26:03 -- event/cpu_locks.sh@22 -- # lslocks -p 69505 00:06:03.303 22:26:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.239 22:26:04 -- event/cpu_locks.sh@107 -- # killprocess 69477 00:06:04.239 22:26:04 -- common/autotest_common.sh@936 -- # '[' -z 69477 ']' 00:06:04.239 22:26:04 -- common/autotest_common.sh@940 -- # kill -0 69477 00:06:04.239 22:26:04 -- common/autotest_common.sh@941 -- # uname 00:06:04.239 22:26:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.239 22:26:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69477 00:06:04.239 22:26:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.239 killing process with pid 69477 00:06:04.239 22:26:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.239 22:26:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69477' 00:06:04.239 22:26:04 -- common/autotest_common.sh@955 -- # kill 69477 00:06:04.239 22:26:04 -- common/autotest_common.sh@960 -- # wait 69477 00:06:05.175 22:26:05 -- event/cpu_locks.sh@108 -- # killprocess 69505 00:06:05.175 22:26:05 -- common/autotest_common.sh@936 -- # '[' -z 69505 ']' 00:06:05.175 22:26:05 -- common/autotest_common.sh@940 -- # kill -0 69505 00:06:05.175 22:26:05 -- common/autotest_common.sh@941 -- # uname 00:06:05.175 22:26:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.175 22:26:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69505 00:06:05.175 22:26:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.175 22:26:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.175 killing process with pid 69505 00:06:05.175 22:26:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69505' 00:06:05.175 22:26:05 -- common/autotest_common.sh@955 -- # kill 69505 00:06:05.175 22:26:05 -- common/autotest_common.sh@960 -- # wait 69505 00:06:05.743 00:06:05.743 real 0m4.287s 00:06:05.743 user 0m4.566s 00:06:05.743 sys 0m1.231s 00:06:05.743 22:26:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.743 22:26:06 -- common/autotest_common.sh@10 -- # set +x 00:06:05.743 ************************************ 00:06:05.743 END TEST locking_app_on_unlocked_coremask 00:06:05.743 ************************************ 00:06:05.743 22:26:06 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:05.743 22:26:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.743 22:26:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.743 22:26:06 -- common/autotest_common.sh@10 -- # set +x 
00:06:05.743 ************************************ 00:06:05.743 START TEST locking_app_on_locked_coremask 00:06:05.743 ************************************ 00:06:05.743 22:26:06 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:05.743 22:26:06 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69584 00:06:05.743 22:26:06 -- event/cpu_locks.sh@116 -- # waitforlisten 69584 /var/tmp/spdk.sock 00:06:05.743 22:26:06 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.743 22:26:06 -- common/autotest_common.sh@829 -- # '[' -z 69584 ']' 00:06:05.743 22:26:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.743 22:26:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.743 22:26:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.743 22:26:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.743 22:26:06 -- common/autotest_common.sh@10 -- # set +x 00:06:05.743 [2024-11-20 22:26:06.336668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:05.743 [2024-11-20 22:26:06.336773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69584 ] 00:06:05.743 [2024-11-20 22:26:06.473890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.003 [2024-11-20 22:26:06.529647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.003 [2024-11-20 22:26:06.529806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.939 22:26:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.939 22:26:07 -- common/autotest_common.sh@862 -- # return 0 00:06:06.939 22:26:07 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69612 00:06:06.939 22:26:07 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.939 22:26:07 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69612 /var/tmp/spdk2.sock 00:06:06.939 22:26:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:06.939 22:26:07 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69612 /var/tmp/spdk2.sock 00:06:06.939 22:26:07 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.939 22:26:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.939 22:26:07 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.939 22:26:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.939 22:26:07 -- common/autotest_common.sh@653 -- # waitforlisten 69612 /var/tmp/spdk2.sock 00:06:06.939 22:26:07 -- common/autotest_common.sh@829 -- # '[' -z 69612 ']' 00:06:06.939 22:26:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.939 22:26:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.939 22:26:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
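locking_app_on_locked_coremask is the negative counterpart: the second spdk_tgt keeps the default locking behaviour on a core 0 that is already claimed, so the test wraps waitforlisten in NOT and only passes if startup is refused, which is exactly the "Cannot create lock on core 0" error that follows. A rough sketch of that expected-failure check, assuming NOT simply inverts the exit status as the es= bookkeeping in the trace suggests:

  $tgt -m 0x1 -r /var/tmp/spdk2.sock &     # same mask, same core, lock still enforced
  spdk_tgt_pid2=$!
  # NOT <cmd> succeeds only when <cmd> fails, so a refused startup is the passing case
  NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock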
00:06:06.939 22:26:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.939 22:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:06.939 [2024-11-20 22:26:07.352287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:06.939 [2024-11-20 22:26:07.352388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69612 ] 00:06:06.939 [2024-11-20 22:26:07.487332] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69584 has claimed it. 00:06:06.939 [2024-11-20 22:26:07.487405] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.507 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69612) - No such process 00:06:07.507 ERROR: process (pid: 69612) is no longer running 00:06:07.507 22:26:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.507 22:26:08 -- common/autotest_common.sh@862 -- # return 1 00:06:07.507 22:26:08 -- common/autotest_common.sh@653 -- # es=1 00:06:07.507 22:26:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.507 22:26:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.507 22:26:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.507 22:26:08 -- event/cpu_locks.sh@122 -- # locks_exist 69584 00:06:07.507 22:26:08 -- event/cpu_locks.sh@22 -- # lslocks -p 69584 00:06:07.508 22:26:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.768 22:26:08 -- event/cpu_locks.sh@124 -- # killprocess 69584 00:06:07.768 22:26:08 -- common/autotest_common.sh@936 -- # '[' -z 69584 ']' 00:06:07.768 22:26:08 -- common/autotest_common.sh@940 -- # kill -0 69584 00:06:07.768 22:26:08 -- common/autotest_common.sh@941 -- # uname 00:06:07.768 22:26:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.768 22:26:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69584 00:06:07.768 killing process with pid 69584 00:06:07.768 22:26:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.768 22:26:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.768 22:26:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69584' 00:06:07.768 22:26:08 -- common/autotest_common.sh@955 -- # kill 69584 00:06:07.768 22:26:08 -- common/autotest_common.sh@960 -- # wait 69584 00:06:08.336 ************************************ 00:06:08.336 END TEST locking_app_on_locked_coremask 00:06:08.336 ************************************ 00:06:08.336 00:06:08.336 real 0m2.594s 00:06:08.336 user 0m2.927s 00:06:08.336 sys 0m0.611s 00:06:08.336 22:26:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.336 22:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:08.337 22:26:08 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.337 22:26:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.337 22:26:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.337 22:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:08.337 ************************************ 00:06:08.337 START TEST locking_overlapped_coremask 00:06:08.337 ************************************ 00:06:08.337 22:26:08 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:08.337 22:26:08 
-- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.337 22:26:08 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69669 00:06:08.337 22:26:08 -- event/cpu_locks.sh@133 -- # waitforlisten 69669 /var/tmp/spdk.sock 00:06:08.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.337 22:26:08 -- common/autotest_common.sh@829 -- # '[' -z 69669 ']' 00:06:08.337 22:26:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.337 22:26:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.337 22:26:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.337 22:26:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.337 22:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:08.337 [2024-11-20 22:26:08.981937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:08.337 [2024-11-20 22:26:08.982236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69669 ] 00:06:08.596 [2024-11-20 22:26:09.122027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.596 [2024-11-20 22:26:09.197292] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.596 [2024-11-20 22:26:09.197977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.596 [2024-11-20 22:26:09.198078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.596 [2024-11-20 22:26:09.198086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.546 22:26:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.546 22:26:09 -- common/autotest_common.sh@862 -- # return 0 00:06:09.546 22:26:09 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69699 00:06:09.546 22:26:09 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:09.546 22:26:09 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69699 /var/tmp/spdk2.sock 00:06:09.546 22:26:09 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.546 22:26:09 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69699 /var/tmp/spdk2.sock 00:06:09.546 22:26:09 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.546 22:26:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.546 22:26:09 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.546 22:26:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.546 22:26:09 -- common/autotest_common.sh@653 -- # waitforlisten 69699 /var/tmp/spdk2.sock 00:06:09.546 22:26:09 -- common/autotest_common.sh@829 -- # '[' -z 69699 ']' 00:06:09.546 22:26:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.546 22:26:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.546 22:26:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
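For the overlapped-coremask tests the masks are picked so that exactly one core collides:

  0x07 = 0b00111  -> cores 0,1,2  (first target, holds the locks)
  0x1c = 0b11100  -> cores 2,3,4  (second target, collides on core 2)

That single overlap is why the trace below reports "Cannot create lock on core 2, probably process 69669 has claimed it", and why check_remaining_locks afterwards expects exactly /var/tmp/spdk_cpu_lock_000 through _002 to still exist.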
00:06:09.546 22:26:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.546 22:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.546 [2024-11-20 22:26:10.040200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:09.546 [2024-11-20 22:26:10.040476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69699 ] 00:06:09.547 [2024-11-20 22:26:10.175426] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69669 has claimed it. 00:06:09.547 [2024-11-20 22:26:10.175524] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.156 ERROR: process (pid: 69699) is no longer running 00:06:10.156 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69699) - No such process 00:06:10.156 22:26:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.156 22:26:10 -- common/autotest_common.sh@862 -- # return 1 00:06:10.156 22:26:10 -- common/autotest_common.sh@653 -- # es=1 00:06:10.156 22:26:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.156 22:26:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.156 22:26:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.156 22:26:10 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.156 22:26:10 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.156 22:26:10 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.156 22:26:10 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.156 22:26:10 -- event/cpu_locks.sh@141 -- # killprocess 69669 00:06:10.157 22:26:10 -- common/autotest_common.sh@936 -- # '[' -z 69669 ']' 00:06:10.157 22:26:10 -- common/autotest_common.sh@940 -- # kill -0 69669 00:06:10.157 22:26:10 -- common/autotest_common.sh@941 -- # uname 00:06:10.157 22:26:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.157 22:26:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69669 00:06:10.157 22:26:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.157 22:26:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.157 22:26:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69669' 00:06:10.157 killing process with pid 69669 00:06:10.157 22:26:10 -- common/autotest_common.sh@955 -- # kill 69669 00:06:10.157 22:26:10 -- common/autotest_common.sh@960 -- # wait 69669 00:06:10.726 00:06:10.726 real 0m2.392s 00:06:10.726 user 0m6.767s 00:06:10.726 sys 0m0.475s 00:06:10.726 22:26:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.726 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.726 ************************************ 00:06:10.726 END TEST locking_overlapped_coremask 00:06:10.726 ************************************ 00:06:10.727 22:26:11 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:10.727 22:26:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.727 22:26:11 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.727 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.727 ************************************ 00:06:10.727 START TEST locking_overlapped_coremask_via_rpc 00:06:10.727 ************************************ 00:06:10.727 22:26:11 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:10.727 22:26:11 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69745 00:06:10.727 22:26:11 -- event/cpu_locks.sh@149 -- # waitforlisten 69745 /var/tmp/spdk.sock 00:06:10.727 22:26:11 -- common/autotest_common.sh@829 -- # '[' -z 69745 ']' 00:06:10.727 22:26:11 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:10.727 22:26:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.727 22:26:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.727 22:26:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.727 22:26:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.727 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.727 [2024-11-20 22:26:11.430673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:10.727 [2024-11-20 22:26:11.430778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69745 ] 00:06:10.986 [2024-11-20 22:26:11.569996] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.986 [2024-11-20 22:26:11.570055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.986 [2024-11-20 22:26:11.641353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.986 [2024-11-20 22:26:11.642021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.986 [2024-11-20 22:26:11.642114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.986 [2024-11-20 22:26:11.642109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.923 22:26:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.923 22:26:12 -- common/autotest_common.sh@862 -- # return 0 00:06:11.923 22:26:12 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69775 00:06:11.923 22:26:12 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.923 22:26:12 -- event/cpu_locks.sh@153 -- # waitforlisten 69775 /var/tmp/spdk2.sock 00:06:11.923 22:26:12 -- common/autotest_common.sh@829 -- # '[' -z 69775 ']' 00:06:11.923 22:26:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.923 22:26:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.923 22:26:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:11.923 22:26:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.923 22:26:12 -- common/autotest_common.sh@10 -- # set +x 00:06:11.923 [2024-11-20 22:26:12.417752] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:11.923 [2024-11-20 22:26:12.417863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69775 ] 00:06:11.923 [2024-11-20 22:26:12.557905] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.923 [2024-11-20 22:26:12.557969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.182 [2024-11-20 22:26:12.692662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.182 [2024-11-20 22:26:12.693003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.182 [2024-11-20 22:26:12.696438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.182 [2024-11-20 22:26:12.696442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.750 22:26:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.750 22:26:13 -- common/autotest_common.sh@862 -- # return 0 00:06:12.750 22:26:13 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.750 22:26:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.750 22:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.750 22:26:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.750 22:26:13 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.750 22:26:13 -- common/autotest_common.sh@650 -- # local es=0 00:06:12.750 22:26:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.750 22:26:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:12.750 22:26:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.750 22:26:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:12.750 22:26:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.750 22:26:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.750 22:26:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.750 22:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.750 [2024-11-20 22:26:13.373408] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69745 has claimed it. 00:06:12.750 request: 00:06:12.750 { 00:06:12.750 "method": "framework_enable_cpumask_locks", 00:06:12.750 "params": {} 00:06:12.750 } 00:06:12.750 Got JSON-RPC error response 00:06:12.750 GoRPCClient: error on JSON-RPC call 00:06:12.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.750 2024/11/20 22:26:13 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:12.750 22:26:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:12.750 22:26:13 -- common/autotest_common.sh@653 -- # es=1 00:06:12.750 22:26:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.750 22:26:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.750 22:26:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.750 22:26:13 -- event/cpu_locks.sh@158 -- # waitforlisten 69745 /var/tmp/spdk.sock 00:06:12.750 22:26:13 -- common/autotest_common.sh@829 -- # '[' -z 69745 ']' 00:06:12.750 22:26:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.750 22:26:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.750 22:26:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.750 22:26:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.750 22:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:13.009 22:26:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.010 22:26:13 -- common/autotest_common.sh@862 -- # return 0 00:06:13.010 22:26:13 -- event/cpu_locks.sh@159 -- # waitforlisten 69775 /var/tmp/spdk2.sock 00:06:13.010 22:26:13 -- common/autotest_common.sh@829 -- # '[' -z 69775 ']' 00:06:13.010 22:26:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.010 22:26:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.010 22:26:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
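The -32603 error above is the expected outcome of the locking_overlapped_coremask_via_rpc test: both targets were started with --disable-cpumask-locks, the first framework_enable_cpumask_locks call on /var/tmp/spdk.sock claims cores 0-2 for pid 69745, and the second call on /var/tmp/spdk2.sock then collides with core 2. A minimal sketch of the same pair of calls issued by hand, assuming SPDK's scripts/rpc.py client exposes the method under the same name as the JSON-RPC method shown above:

    # hand-issued equivalents of the rpc_cmd calls in cpu_locks.sh (hypothetical session)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # primary target: succeeds, lock files created for cores 0-2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # secondary target: fails with Code=-32603 'Failed to claim CPU core: 2'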
00:06:13.010 22:26:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.010 22:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:13.269 22:26:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.269 22:26:13 -- common/autotest_common.sh@862 -- # return 0 00:06:13.269 22:26:13 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.269 ************************************ 00:06:13.269 END TEST locking_overlapped_coremask_via_rpc 00:06:13.269 ************************************ 00:06:13.269 22:26:13 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.269 22:26:13 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.269 22:26:13 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.269 00:06:13.269 real 0m2.545s 00:06:13.269 user 0m1.290s 00:06:13.269 sys 0m0.182s 00:06:13.269 22:26:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.269 22:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:13.269 22:26:13 -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.269 22:26:13 -- event/cpu_locks.sh@15 -- # [[ -z 69745 ]] 00:06:13.269 22:26:13 -- event/cpu_locks.sh@15 -- # killprocess 69745 00:06:13.269 22:26:13 -- common/autotest_common.sh@936 -- # '[' -z 69745 ']' 00:06:13.269 22:26:13 -- common/autotest_common.sh@940 -- # kill -0 69745 00:06:13.269 22:26:13 -- common/autotest_common.sh@941 -- # uname 00:06:13.269 22:26:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.269 22:26:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69745 00:06:13.269 killing process with pid 69745 00:06:13.269 22:26:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.269 22:26:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.269 22:26:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69745' 00:06:13.269 22:26:13 -- common/autotest_common.sh@955 -- # kill 69745 00:06:13.269 22:26:13 -- common/autotest_common.sh@960 -- # wait 69745 00:06:13.838 22:26:14 -- event/cpu_locks.sh@16 -- # [[ -z 69775 ]] 00:06:13.838 22:26:14 -- event/cpu_locks.sh@16 -- # killprocess 69775 00:06:13.838 22:26:14 -- common/autotest_common.sh@936 -- # '[' -z 69775 ']' 00:06:13.838 22:26:14 -- common/autotest_common.sh@940 -- # kill -0 69775 00:06:13.838 22:26:14 -- common/autotest_common.sh@941 -- # uname 00:06:13.838 22:26:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.838 22:26:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69775 00:06:13.838 killing process with pid 69775 00:06:13.838 22:26:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:13.838 22:26:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:13.838 22:26:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69775' 00:06:13.838 22:26:14 -- common/autotest_common.sh@955 -- # kill 69775 00:06:13.838 22:26:14 -- common/autotest_common.sh@960 -- # wait 69775 00:06:14.406 22:26:14 -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.406 Process with pid 69745 is not found 00:06:14.406 Process with pid 69775 is not found 00:06:14.406 22:26:14 -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.406 22:26:14 -- event/cpu_locks.sh@15 -- # [[ -z 69745 ]] 
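The check_remaining_locks step above is the core assertion of these tests: after framework_enable_cpumask_locks succeeds, one lock file per claimed core must exist under /var/tmp. A standalone sketch of the same check, assuming a target started with -m 0x7 (cores 0-2) as in this run:

    # mirrors check_remaining_locks in test/event/cpu_locks.sh for a 0x7 core mask
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match cores 0-2"
    # a specific claim can also be confirmed the way locks_exist does it:
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "target still holds its core locks"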
00:06:14.406 22:26:14 -- event/cpu_locks.sh@15 -- # killprocess 69745 00:06:14.406 22:26:14 -- common/autotest_common.sh@936 -- # '[' -z 69745 ']' 00:06:14.406 22:26:14 -- common/autotest_common.sh@940 -- # kill -0 69745 00:06:14.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69745) - No such process 00:06:14.406 22:26:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69745 is not found' 00:06:14.406 22:26:14 -- event/cpu_locks.sh@16 -- # [[ -z 69775 ]] 00:06:14.406 22:26:14 -- event/cpu_locks.sh@16 -- # killprocess 69775 00:06:14.406 22:26:14 -- common/autotest_common.sh@936 -- # '[' -z 69775 ']' 00:06:14.406 22:26:14 -- common/autotest_common.sh@940 -- # kill -0 69775 00:06:14.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69775) - No such process 00:06:14.406 22:26:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69775 is not found' 00:06:14.406 22:26:14 -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.406 ************************************ 00:06:14.406 END TEST cpu_locks 00:06:14.406 ************************************ 00:06:14.406 00:06:14.406 real 0m21.347s 00:06:14.406 user 0m36.680s 00:06:14.406 sys 0m5.953s 00:06:14.406 22:26:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.406 22:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:14.406 ************************************ 00:06:14.406 END TEST event 00:06:14.406 ************************************ 00:06:14.406 00:06:14.406 real 0m49.111s 00:06:14.406 user 1m34.145s 00:06:14.406 sys 0m9.741s 00:06:14.406 22:26:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.406 22:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:14.406 22:26:14 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.407 22:26:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.407 22:26:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.407 22:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:14.407 ************************************ 00:06:14.407 START TEST thread 00:06:14.407 ************************************ 00:06:14.407 22:26:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.407 * Looking for test storage... 
00:06:14.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:14.407 22:26:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:14.407 22:26:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:14.407 22:26:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:14.407 22:26:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:14.407 22:26:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:14.407 22:26:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:14.407 22:26:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:14.407 22:26:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:14.407 22:26:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:14.407 22:26:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.407 22:26:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:14.407 22:26:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:14.407 22:26:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:14.407 22:26:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:14.407 22:26:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:14.407 22:26:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:14.407 22:26:15 -- scripts/common.sh@344 -- # : 1 00:06:14.407 22:26:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:14.407 22:26:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.407 22:26:15 -- scripts/common.sh@364 -- # decimal 1 00:06:14.407 22:26:15 -- scripts/common.sh@352 -- # local d=1 00:06:14.407 22:26:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.407 22:26:15 -- scripts/common.sh@354 -- # echo 1 00:06:14.407 22:26:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:14.407 22:26:15 -- scripts/common.sh@365 -- # decimal 2 00:06:14.407 22:26:15 -- scripts/common.sh@352 -- # local d=2 00:06:14.407 22:26:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.407 22:26:15 -- scripts/common.sh@354 -- # echo 2 00:06:14.407 22:26:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:14.407 22:26:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:14.407 22:26:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:14.407 22:26:15 -- scripts/common.sh@367 -- # return 0 00:06:14.407 22:26:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.407 22:26:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.407 --rc genhtml_branch_coverage=1 00:06:14.407 --rc genhtml_function_coverage=1 00:06:14.407 --rc genhtml_legend=1 00:06:14.407 --rc geninfo_all_blocks=1 00:06:14.407 --rc geninfo_unexecuted_blocks=1 00:06:14.407 00:06:14.407 ' 00:06:14.407 22:26:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.407 --rc genhtml_branch_coverage=1 00:06:14.407 --rc genhtml_function_coverage=1 00:06:14.407 --rc genhtml_legend=1 00:06:14.407 --rc geninfo_all_blocks=1 00:06:14.407 --rc geninfo_unexecuted_blocks=1 00:06:14.407 00:06:14.407 ' 00:06:14.407 22:26:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.407 --rc genhtml_branch_coverage=1 00:06:14.407 --rc genhtml_function_coverage=1 00:06:14.407 --rc genhtml_legend=1 00:06:14.407 --rc geninfo_all_blocks=1 00:06:14.407 --rc geninfo_unexecuted_blocks=1 00:06:14.407 00:06:14.407 ' 00:06:14.407 22:26:15 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.407 --rc genhtml_branch_coverage=1 00:06:14.407 --rc genhtml_function_coverage=1 00:06:14.407 --rc genhtml_legend=1 00:06:14.407 --rc geninfo_all_blocks=1 00:06:14.407 --rc geninfo_unexecuted_blocks=1 00:06:14.407 00:06:14.407 ' 00:06:14.407 22:26:15 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.407 22:26:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:14.407 22:26:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.407 22:26:15 -- common/autotest_common.sh@10 -- # set +x 00:06:14.407 ************************************ 00:06:14.407 START TEST thread_poller_perf 00:06:14.407 ************************************ 00:06:14.407 22:26:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.407 [2024-11-20 22:26:15.128587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:14.407 [2024-11-20 22:26:15.128813] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69934 ] 00:06:14.666 [2024-11-20 22:26:15.262806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.666 [2024-11-20 22:26:15.330085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.666 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:16.043 [2024-11-20T22:26:16.777Z] ====================================== 00:06:16.043 [2024-11-20T22:26:16.777Z] busy:2211054984 (cyc) 00:06:16.043 [2024-11-20T22:26:16.777Z] total_run_count: 392000 00:06:16.043 [2024-11-20T22:26:16.777Z] tsc_hz: 2200000000 (cyc) 00:06:16.043 [2024-11-20T22:26:16.777Z] ====================================== 00:06:16.043 [2024-11-20T22:26:16.777Z] poller_cost: 5640 (cyc), 2563 (nsec) 00:06:16.043 00:06:16.043 real 0m1.309s 00:06:16.043 user 0m1.133s 00:06:16.043 sys 0m0.068s 00:06:16.043 22:26:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.043 22:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:16.043 ************************************ 00:06:16.043 END TEST thread_poller_perf 00:06:16.043 ************************************ 00:06:16.043 22:26:16 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.043 22:26:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:16.043 22:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.043 22:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:16.043 ************************************ 00:06:16.043 START TEST thread_poller_perf 00:06:16.043 ************************************ 00:06:16.043 22:26:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.043 [2024-11-20 22:26:16.495127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
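The poller_cost figure in the run above is plain arithmetic over the other counters in the table: busy TSC cycles divided by the number of completed poller calls, then converted to nanoseconds with the reported tsc_hz. Checking the first run's numbers:

    poller_cost = 2211054984 cyc / 392000 runs ≈ 5640 cyc per call
    5640 cyc / 2.2 cyc per nsec (tsc_hz = 2200000000 cyc/s) ≈ 2563 nsec

The second run below (-l 0, no poller period) works out the same way: 2202598122 / 5369000 ≈ 410 cyc, i.e. about 186 nsec per call.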
00:06:16.043 [2024-11-20 22:26:16.495243] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69964 ] 00:06:16.043 [2024-11-20 22:26:16.632939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.043 [2024-11-20 22:26:16.700934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.043 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:17.419 [2024-11-20T22:26:18.153Z] ====================================== 00:06:17.419 [2024-11-20T22:26:18.153Z] busy:2202598122 (cyc) 00:06:17.419 [2024-11-20T22:26:18.153Z] total_run_count: 5369000 00:06:17.419 [2024-11-20T22:26:18.153Z] tsc_hz: 2200000000 (cyc) 00:06:17.419 [2024-11-20T22:26:18.153Z] ====================================== 00:06:17.419 [2024-11-20T22:26:18.153Z] poller_cost: 410 (cyc), 186 (nsec) 00:06:17.419 ************************************ 00:06:17.419 END TEST thread_poller_perf 00:06:17.419 ************************************ 00:06:17.419 00:06:17.419 real 0m1.306s 00:06:17.419 user 0m1.133s 00:06:17.419 sys 0m0.066s 00:06:17.419 22:26:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.419 22:26:17 -- common/autotest_common.sh@10 -- # set +x 00:06:17.419 22:26:17 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.419 ************************************ 00:06:17.419 END TEST thread 00:06:17.419 ************************************ 00:06:17.419 00:06:17.419 real 0m2.871s 00:06:17.419 user 0m2.395s 00:06:17.419 sys 0m0.259s 00:06:17.419 22:26:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.419 22:26:17 -- common/autotest_common.sh@10 -- # set +x 00:06:17.419 22:26:17 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:17.419 22:26:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.419 22:26:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.419 22:26:17 -- common/autotest_common.sh@10 -- # set +x 00:06:17.419 ************************************ 00:06:17.419 START TEST accel 00:06:17.419 ************************************ 00:06:17.419 22:26:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:17.419 * Looking for test storage... 
00:06:17.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:17.419 22:26:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:17.419 22:26:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:17.419 22:26:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.419 22:26:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.419 22:26:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.419 22:26:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.419 22:26:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.419 22:26:18 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.419 22:26:18 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.419 22:26:18 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.419 22:26:18 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.419 22:26:18 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.419 22:26:18 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.419 22:26:18 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.419 22:26:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.419 22:26:18 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.419 22:26:18 -- scripts/common.sh@344 -- # : 1 00:06:17.419 22:26:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.419 22:26:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.419 22:26:18 -- scripts/common.sh@364 -- # decimal 1 00:06:17.419 22:26:18 -- scripts/common.sh@352 -- # local d=1 00:06:17.419 22:26:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.419 22:26:18 -- scripts/common.sh@354 -- # echo 1 00:06:17.419 22:26:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:17.419 22:26:18 -- scripts/common.sh@365 -- # decimal 2 00:06:17.419 22:26:18 -- scripts/common.sh@352 -- # local d=2 00:06:17.419 22:26:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.419 22:26:18 -- scripts/common.sh@354 -- # echo 2 00:06:17.419 22:26:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:17.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:17.419 22:26:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:17.419 22:26:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:17.419 22:26:18 -- scripts/common.sh@367 -- # return 0 00:06:17.419 22:26:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.419 22:26:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:17.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.419 --rc genhtml_branch_coverage=1 00:06:17.419 --rc genhtml_function_coverage=1 00:06:17.419 --rc genhtml_legend=1 00:06:17.419 --rc geninfo_all_blocks=1 00:06:17.419 --rc geninfo_unexecuted_blocks=1 00:06:17.419 00:06:17.419 ' 00:06:17.419 22:26:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:17.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.419 --rc genhtml_branch_coverage=1 00:06:17.419 --rc genhtml_function_coverage=1 00:06:17.419 --rc genhtml_legend=1 00:06:17.419 --rc geninfo_all_blocks=1 00:06:17.419 --rc geninfo_unexecuted_blocks=1 00:06:17.419 00:06:17.419 ' 00:06:17.419 22:26:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:17.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.419 --rc genhtml_branch_coverage=1 00:06:17.419 --rc genhtml_function_coverage=1 00:06:17.419 --rc genhtml_legend=1 00:06:17.419 --rc geninfo_all_blocks=1 00:06:17.419 --rc geninfo_unexecuted_blocks=1 00:06:17.419 00:06:17.419 ' 00:06:17.419 22:26:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:17.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.419 --rc genhtml_branch_coverage=1 00:06:17.419 --rc genhtml_function_coverage=1 00:06:17.419 --rc genhtml_legend=1 00:06:17.419 --rc geninfo_all_blocks=1 00:06:17.419 --rc geninfo_unexecuted_blocks=1 00:06:17.419 00:06:17.419 ' 00:06:17.419 22:26:18 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:17.419 22:26:18 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:17.419 22:26:18 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.419 22:26:18 -- accel/accel.sh@59 -- # spdk_tgt_pid=70050 00:06:17.419 22:26:18 -- accel/accel.sh@60 -- # waitforlisten 70050 00:06:17.419 22:26:18 -- common/autotest_common.sh@829 -- # '[' -z 70050 ']' 00:06:17.419 22:26:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.419 22:26:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.419 22:26:18 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:17.419 22:26:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.419 22:26:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.419 22:26:18 -- accel/accel.sh@58 -- # build_accel_config 00:06:17.419 22:26:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.419 22:26:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.419 22:26:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.419 22:26:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.419 22:26:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.419 22:26:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.419 22:26:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.419 22:26:18 -- accel/accel.sh@42 -- # jq -r . 00:06:17.419 [2024-11-20 22:26:18.126987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:17.419 [2024-11-20 22:26:18.127252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70050 ] 00:06:17.678 [2024-11-20 22:26:18.262695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.678 [2024-11-20 22:26:18.332510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.678 [2024-11-20 22:26:18.332981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.616 22:26:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.616 22:26:19 -- common/autotest_common.sh@862 -- # return 0 00:06:18.616 22:26:19 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:18.616 22:26:19 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:18.616 22:26:19 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:18.616 22:26:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.616 22:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.616 22:26:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # IFS== 00:06:18.616 22:26:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.616 22:26:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.616 22:26:19 -- accel/accel.sh@67 -- # killprocess 70050 00:06:18.616 22:26:19 -- common/autotest_common.sh@936 -- # '[' -z 70050 ']' 00:06:18.616 22:26:19 -- common/autotest_common.sh@940 -- # kill -0 70050 00:06:18.616 22:26:19 -- common/autotest_common.sh@941 -- # uname 00:06:18.616 22:26:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.616 22:26:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70050 00:06:18.616 22:26:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.616 22:26:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.616 22:26:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70050' 00:06:18.616 killing process with pid 70050 00:06:18.616 22:26:19 -- common/autotest_common.sh@955 -- # kill 70050 00:06:18.616 22:26:19 -- common/autotest_common.sh@960 -- # wait 70050 00:06:19.184 22:26:19 -- accel/accel.sh@68 -- # trap - ERR 00:06:19.184 22:26:19 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:19.184 22:26:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:19.184 22:26:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.184 22:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:19.184 22:26:19 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:19.184 22:26:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:19.184 22:26:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.184 22:26:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.184 22:26:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.184 22:26:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.184 22:26:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.184 22:26:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:19.184 22:26:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.184 22:26:19 -- accel/accel.sh@42 -- # jq -r . 00:06:19.184 22:26:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.184 22:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:19.184 22:26:19 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:19.184 22:26:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.184 22:26:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.184 22:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:19.184 ************************************ 00:06:19.184 START TEST accel_missing_filename 00:06:19.184 ************************************ 00:06:19.184 22:26:19 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:19.184 22:26:19 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.184 22:26:19 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:19.184 22:26:19 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:19.184 22:26:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.184 22:26:19 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:19.184 22:26:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.184 22:26:19 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:19.184 22:26:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:19.184 22:26:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.184 22:26:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.184 22:26:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.184 22:26:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.184 22:26:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.184 22:26:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.184 22:26:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.184 22:26:19 -- accel/accel.sh@42 -- # jq -r . 00:06:19.184 [2024-11-20 22:26:19.814852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:19.184 [2024-11-20 22:26:19.815084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70115 ] 00:06:19.442 [2024-11-20 22:26:19.950069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.442 [2024-11-20 22:26:20.028165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.442 [2024-11-20 22:26:20.100642] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.701 [2024-11-20 22:26:20.203663] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:19.701 A filename is required. 
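"A filename is required." is the negative result this test is after: per the accel_perf option listing printed later in this log, -l names the uncompressed input file for compress/decompress workloads, so a compress run without it aborts during startup. A sketch of an invocation that supplies the input, assuming the bib test file used by the next test is acceptable on its own:

    # hypothetical passing compress run; note no -y, which compress rejects (see accel_compress_verify below)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib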
00:06:19.701 22:26:20 -- common/autotest_common.sh@653 -- # es=234 00:06:19.702 22:26:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.702 22:26:20 -- common/autotest_common.sh@662 -- # es=106 00:06:19.702 22:26:20 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.702 22:26:20 -- common/autotest_common.sh@670 -- # es=1 00:06:19.702 22:26:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.702 00:06:19.702 real 0m0.484s 00:06:19.702 user 0m0.290s 00:06:19.702 sys 0m0.138s 00:06:19.702 22:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.702 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.702 ************************************ 00:06:19.702 END TEST accel_missing_filename 00:06:19.702 ************************************ 00:06:19.702 22:26:20 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.702 22:26:20 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:19.702 22:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.702 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.702 ************************************ 00:06:19.702 START TEST accel_compress_verify 00:06:19.702 ************************************ 00:06:19.702 22:26:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.702 22:26:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.702 22:26:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.702 22:26:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:19.702 22:26:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.702 22:26:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:19.702 22:26:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.702 22:26:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.702 22:26:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.702 22:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.702 22:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.702 22:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.702 22:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.702 22:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.702 22:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.702 22:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.702 22:26:20 -- accel/accel.sh@42 -- # jq -r . 00:06:19.702 [2024-11-20 22:26:20.351133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:19.702 [2024-11-20 22:26:20.351246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ] 00:06:19.961 [2024-11-20 22:26:20.488979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.961 [2024-11-20 22:26:20.559869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.961 [2024-11-20 22:26:20.634117] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.220 [2024-11-20 22:26:20.737691] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:20.220 00:06:20.220 Compression does not support the verify option, aborting. 00:06:20.220 22:26:20 -- common/autotest_common.sh@653 -- # es=161 00:06:20.220 22:26:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.220 22:26:20 -- common/autotest_common.sh@662 -- # es=33 00:06:20.220 22:26:20 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.220 22:26:20 -- common/autotest_common.sh@670 -- # es=1 00:06:20.220 22:26:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.220 ************************************ 00:06:20.220 END TEST accel_compress_verify 00:06:20.220 ************************************ 00:06:20.220 00:06:20.220 real 0m0.515s 00:06:20.220 user 0m0.327s 00:06:20.220 sys 0m0.136s 00:06:20.220 22:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.220 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:20.220 22:26:20 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:20.220 22:26:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.220 22:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.220 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:20.220 ************************************ 00:06:20.220 START TEST accel_wrong_workload 00:06:20.220 ************************************ 00:06:20.220 22:26:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:20.220 22:26:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:20.220 22:26:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:20.220 22:26:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:20.220 22:26:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.220 22:26:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:20.220 22:26:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.220 22:26:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:20.220 22:26:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:20.220 22:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.220 22:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.220 22:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.220 22:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.220 22:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.220 22:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.220 22:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.220 22:26:20 -- accel/accel.sh@42 -- # jq -r . 
00:06:20.220 Unsupported workload type: foobar 00:06:20.220 [2024-11-20 22:26:20.914996] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:20.220 accel_perf options: 00:06:20.220 [-h help message] 00:06:20.220 [-q queue depth per core] 00:06:20.220 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:20.220 [-T number of threads per core 00:06:20.220 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:20.220 [-t time in seconds] 00:06:20.220 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:20.220 [ dif_verify, , dif_generate, dif_generate_copy 00:06:20.220 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:20.220 [-l for compress/decompress workloads, name of uncompressed input file 00:06:20.220 [-S for crc32c workload, use this seed value (default 0) 00:06:20.220 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:20.220 [-f for fill workload, use this BYTE value (default 255) 00:06:20.220 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:20.220 [-y verify result if this switch is on] 00:06:20.220 [-a tasks to allocate per core (default: same value as -q)] 00:06:20.220 Can be used to spread operations across a wider range of memory. 00:06:20.220 22:26:20 -- common/autotest_common.sh@653 -- # es=1 00:06:20.220 22:26:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.220 22:26:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.220 22:26:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.220 00:06:20.220 real 0m0.027s 00:06:20.220 user 0m0.016s 00:06:20.220 sys 0m0.011s 00:06:20.220 22:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.220 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:20.220 ************************************ 00:06:20.220 END TEST accel_wrong_workload 00:06:20.220 ************************************ 00:06:20.480 22:26:20 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:20.480 22:26:20 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:20.480 22:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.480 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:20.480 ************************************ 00:06:20.480 START TEST accel_negative_buffers 00:06:20.480 ************************************ 00:06:20.480 22:26:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:20.480 22:26:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:20.480 22:26:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:20.480 22:26:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:20.480 22:26:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.480 22:26:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:20.480 22:26:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.480 22:26:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:20.480 22:26:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:20.480 22:26:20 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:20.480 22:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.480 22:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.480 22:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.480 22:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.480 22:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.480 22:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.480 22:26:20 -- accel/accel.sh@42 -- # jq -r . 00:06:20.480 -x option must be non-negative. 00:06:20.480 [2024-11-20 22:26:20.990233] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:20.480 accel_perf options: 00:06:20.480 [-h help message] 00:06:20.480 [-q queue depth per core] 00:06:20.480 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:20.480 [-T number of threads per core 00:06:20.480 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:20.480 [-t time in seconds] 00:06:20.480 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:20.480 [ dif_verify, , dif_generate, dif_generate_copy 00:06:20.480 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:20.480 [-l for compress/decompress workloads, name of uncompressed input file 00:06:20.480 [-S for crc32c workload, use this seed value (default 0) 00:06:20.480 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:20.480 [-f for fill workload, use this BYTE value (default 255) 00:06:20.480 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:20.480 [-y verify result if this switch is on] 00:06:20.480 [-a tasks to allocate per core (default: same value as -q)] 00:06:20.480 Can be used to spread operations across a wider range of memory. 
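Both negative tests above (-w foobar and -x -1) are rejected inside spdk_app_parse_args before any accel work is queued. For contrast, the option listing printed above maps directly onto the crc32c run that follows; a minimal valid invocation built only from switches shown in that listing:

    # 1-second crc32c run with seed 32 and result verification (the next test runs exactly this via accel_test)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y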
00:06:20.480 22:26:20 -- common/autotest_common.sh@653 -- # es=1 00:06:20.480 22:26:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.480 22:26:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.480 22:26:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.480 00:06:20.480 real 0m0.028s 00:06:20.480 user 0m0.016s 00:06:20.480 sys 0m0.012s 00:06:20.480 ************************************ 00:06:20.480 END TEST accel_negative_buffers 00:06:20.480 ************************************ 00:06:20.481 22:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.481 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:20.481 22:26:21 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:20.481 22:26:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:20.481 22:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.481 22:26:21 -- common/autotest_common.sh@10 -- # set +x 00:06:20.481 ************************************ 00:06:20.481 START TEST accel_crc32c 00:06:20.481 ************************************ 00:06:20.481 22:26:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:20.481 22:26:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.481 22:26:21 -- accel/accel.sh@17 -- # local accel_module 00:06:20.481 22:26:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:20.481 22:26:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:20.481 22:26:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.481 22:26:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.481 22:26:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.481 22:26:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.481 22:26:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.481 22:26:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.481 22:26:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.481 22:26:21 -- accel/accel.sh@42 -- # jq -r . 00:06:20.481 [2024-11-20 22:26:21.066223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:20.481 [2024-11-20 22:26:21.066332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70198 ] 00:06:20.481 [2024-11-20 22:26:21.205238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.739 [2024-11-20 22:26:21.282721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.117 22:26:22 -- accel/accel.sh@18 -- # out=' 00:06:22.117 SPDK Configuration: 00:06:22.117 Core mask: 0x1 00:06:22.117 00:06:22.117 Accel Perf Configuration: 00:06:22.117 Workload Type: crc32c 00:06:22.117 CRC-32C seed: 32 00:06:22.117 Transfer size: 4096 bytes 00:06:22.117 Vector count 1 00:06:22.117 Module: software 00:06:22.117 Queue depth: 32 00:06:22.117 Allocate depth: 32 00:06:22.117 # threads/core: 1 00:06:22.117 Run time: 1 seconds 00:06:22.117 Verify: Yes 00:06:22.117 00:06:22.117 Running for 1 seconds... 
00:06:22.117 00:06:22.117 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.117 ------------------------------------------------------------------------------------ 00:06:22.117 0,0 548352/s 2142 MiB/s 0 0 00:06:22.117 ==================================================================================== 00:06:22.117 Total 548352/s 2142 MiB/s 0 0' 00:06:22.117 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.117 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.117 22:26:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:22.117 22:26:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:22.117 22:26:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.117 22:26:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.117 22:26:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.118 22:26:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.118 22:26:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.118 22:26:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.118 22:26:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.118 22:26:22 -- accel/accel.sh@42 -- # jq -r . 00:06:22.118 [2024-11-20 22:26:22.561383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:22.118 [2024-11-20 22:26:22.561487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70223 ] 00:06:22.118 [2024-11-20 22:26:22.696543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.118 [2024-11-20 22:26:22.748328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val= 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val= 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val=0x1 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val= 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val= 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val=crc32c 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val=32 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val= 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val=software 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val=32 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val=32 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val=1 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val=Yes 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val= 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.118 22:26:22 -- accel/accel.sh@21 -- # val= 00:06:22.118 22:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:22.118 22:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:23.498 22:26:23 -- accel/accel.sh@21 -- # val= 00:06:23.498 22:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:23.498 22:26:23 -- accel/accel.sh@21 -- # val= 00:06:23.498 22:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:23.498 22:26:23 -- accel/accel.sh@21 -- # val= 00:06:23.498 22:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:23.498 22:26:23 -- accel/accel.sh@21 -- # val= 00:06:23.498 22:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:23.498 22:26:23 -- accel/accel.sh@21 -- # val= 00:06:23.498 22:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:23.498 22:26:23 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.498 ************************************ 00:06:23.498 END TEST accel_crc32c 00:06:23.498 ************************************ 00:06:23.498 22:26:23 -- accel/accel.sh@21 -- # val= 00:06:23.498 22:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:23.498 22:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:23.498 22:26:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.498 22:26:23 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:23.498 22:26:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.498 00:06:23.498 real 0m2.885s 00:06:23.498 user 0m2.427s 00:06:23.498 sys 0m0.258s 00:06:23.498 22:26:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.498 22:26:23 -- common/autotest_common.sh@10 -- # set +x 00:06:23.498 22:26:23 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:23.498 22:26:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:23.498 22:26:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.498 22:26:23 -- common/autotest_common.sh@10 -- # set +x 00:06:23.498 ************************************ 00:06:23.498 START TEST accel_crc32c_C2 00:06:23.498 ************************************ 00:06:23.498 22:26:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:23.498 22:26:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.498 22:26:23 -- accel/accel.sh@17 -- # local accel_module 00:06:23.498 22:26:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:23.498 22:26:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:23.498 22:26:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.498 22:26:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.498 22:26:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.498 22:26:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.498 22:26:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.498 22:26:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.498 22:26:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.498 22:26:23 -- accel/accel.sh@42 -- # jq -r . 00:06:23.498 [2024-11-20 22:26:24.005932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:23.498 [2024-11-20 22:26:24.006033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70252 ] 00:06:23.498 [2024-11-20 22:26:24.143930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.498 [2024-11-20 22:26:24.227350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.874 22:26:25 -- accel/accel.sh@18 -- # out=' 00:06:24.874 SPDK Configuration: 00:06:24.874 Core mask: 0x1 00:06:24.874 00:06:24.874 Accel Perf Configuration: 00:06:24.874 Workload Type: crc32c 00:06:24.874 CRC-32C seed: 0 00:06:24.874 Transfer size: 4096 bytes 00:06:24.874 Vector count 2 00:06:24.874 Module: software 00:06:24.874 Queue depth: 32 00:06:24.874 Allocate depth: 32 00:06:24.874 # threads/core: 1 00:06:24.874 Run time: 1 seconds 00:06:24.874 Verify: Yes 00:06:24.874 00:06:24.874 Running for 1 seconds... 
00:06:24.874 00:06:24.874 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.874 ------------------------------------------------------------------------------------ 00:06:24.874 0,0 428800/s 3350 MiB/s 0 0 00:06:24.874 ==================================================================================== 00:06:24.874 Total 428800/s 1675 MiB/s 0 0' 00:06:24.874 22:26:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:24.874 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.874 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.874 22:26:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:24.874 22:26:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.874 22:26:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.874 22:26:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.874 22:26:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.874 22:26:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.874 22:26:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.874 22:26:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.874 22:26:25 -- accel/accel.sh@42 -- # jq -r . 00:06:24.874 [2024-11-20 22:26:25.472610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:24.874 [2024-11-20 22:26:25.472703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70277 ] 00:06:24.874 [2024-11-20 22:26:25.599167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.134 [2024-11-20 22:26:25.654484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val= 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val= 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val=0x1 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val= 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val= 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val=crc32c 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val=0 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val= 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val=software 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val=32 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val=32 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val=1 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val=Yes 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val= 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.134 22:26:25 -- accel/accel.sh@21 -- # val= 00:06:25.134 22:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:25.134 22:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:26.512 22:26:26 -- accel/accel.sh@21 -- # val= 00:06:26.512 22:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:26.512 22:26:26 -- accel/accel.sh@21 -- # val= 00:06:26.512 22:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:26.512 22:26:26 -- accel/accel.sh@21 -- # val= 00:06:26.512 22:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:26.512 22:26:26 -- accel/accel.sh@21 -- # val= 00:06:26.512 22:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:26.512 ************************************ 00:06:26.512 END TEST accel_crc32c_C2 00:06:26.512 ************************************ 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:26.512 22:26:26 -- accel/accel.sh@21 -- # val= 
00:06:26.512 22:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:26.512 22:26:26 -- accel/accel.sh@21 -- # val= 00:06:26.512 22:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:26.512 22:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:26.512 22:26:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.512 22:26:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:26.512 22:26:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.512 00:06:26.512 real 0m2.941s 00:06:26.512 user 0m2.470s 00:06:26.512 sys 0m0.270s 00:06:26.512 22:26:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.512 22:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:26.512 22:26:26 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:26.512 22:26:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:26.512 22:26:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.512 22:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:26.512 ************************************ 00:06:26.512 START TEST accel_copy 00:06:26.512 ************************************ 00:06:26.512 22:26:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:26.512 22:26:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.512 22:26:26 -- accel/accel.sh@17 -- # local accel_module 00:06:26.512 22:26:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:26.512 22:26:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:26.512 22:26:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.512 22:26:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.512 22:26:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.512 22:26:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.512 22:26:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.512 22:26:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.512 22:26:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.512 22:26:26 -- accel/accel.sh@42 -- # jq -r . 00:06:26.512 [2024-11-20 22:26:27.006449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:26.512 [2024-11-20 22:26:27.006972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70308 ] 00:06:26.512 [2024-11-20 22:26:27.142114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.512 [2024-11-20 22:26:27.203721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.889 22:26:28 -- accel/accel.sh@18 -- # out=' 00:06:27.889 SPDK Configuration: 00:06:27.889 Core mask: 0x1 00:06:27.889 00:06:27.889 Accel Perf Configuration: 00:06:27.889 Workload Type: copy 00:06:27.889 Transfer size: 4096 bytes 00:06:27.889 Vector count 1 00:06:27.889 Module: software 00:06:27.889 Queue depth: 32 00:06:27.889 Allocate depth: 32 00:06:27.889 # threads/core: 1 00:06:27.889 Run time: 1 seconds 00:06:27.889 Verify: Yes 00:06:27.889 00:06:27.889 Running for 1 seconds... 
00:06:27.889 00:06:27.889 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.889 ------------------------------------------------------------------------------------ 00:06:27.889 0,0 394176/s 1539 MiB/s 0 0 00:06:27.889 ==================================================================================== 00:06:27.889 Total 394176/s 1539 MiB/s 0 0' 00:06:27.889 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:27.889 22:26:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:27.889 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:27.889 22:26:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:27.889 22:26:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.889 22:26:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.889 22:26:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.889 22:26:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.889 22:26:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.889 22:26:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.889 22:26:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.889 22:26:28 -- accel/accel.sh@42 -- # jq -r . 00:06:27.889 [2024-11-20 22:26:28.511428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:27.889 [2024-11-20 22:26:28.511514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70328 ] 00:06:28.148 [2024-11-20 22:26:28.646181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.148 [2024-11-20 22:26:28.711509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val= 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val= 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val=0x1 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val= 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val= 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val=copy 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- 
accel/accel.sh@21 -- # val= 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val=software 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val=32 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val=32 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val=1 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val=Yes 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val= 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.148 22:26:28 -- accel/accel.sh@21 -- # val= 00:06:28.148 22:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:28.148 22:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.524 22:26:29 -- accel/accel.sh@21 -- # val= 00:06:29.524 22:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:29.524 22:26:29 -- accel/accel.sh@21 -- # val= 00:06:29.524 22:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:29.524 22:26:29 -- accel/accel.sh@21 -- # val= 00:06:29.524 22:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:29.524 22:26:29 -- accel/accel.sh@21 -- # val= 00:06:29.524 22:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:29.524 22:26:29 -- accel/accel.sh@21 -- # val= 00:06:29.524 22:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:29.524 22:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:29.524 22:26:30 -- accel/accel.sh@21 -- # val= 00:06:29.524 22:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.524 22:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.524 22:26:30 -- 
accel/accel.sh@20 -- # read -r var val 00:06:29.524 22:26:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.524 22:26:30 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:29.524 22:26:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.524 00:06:29.524 real 0m3.020s 00:06:29.524 user 0m2.537s 00:06:29.524 sys 0m0.279s 00:06:29.524 22:26:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.524 ************************************ 00:06:29.524 END TEST accel_copy 00:06:29.524 ************************************ 00:06:29.524 22:26:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.524 22:26:30 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.524 22:26:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:29.524 22:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.524 22:26:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.524 ************************************ 00:06:29.524 START TEST accel_fill 00:06:29.524 ************************************ 00:06:29.524 22:26:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.524 22:26:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.524 22:26:30 -- accel/accel.sh@17 -- # local accel_module 00:06:29.524 22:26:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.524 22:26:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.524 22:26:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.524 22:26:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.524 22:26:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.524 22:26:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.524 22:26:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.524 22:26:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.524 22:26:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.524 22:26:30 -- accel/accel.sh@42 -- # jq -r . 00:06:29.524 [2024-11-20 22:26:30.076305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:29.524 [2024-11-20 22:26:30.076401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70362 ] 00:06:29.524 [2024-11-20 22:26:30.206079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.783 [2024-11-20 22:26:30.273851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.159 22:26:31 -- accel/accel.sh@18 -- # out=' 00:06:31.159 SPDK Configuration: 00:06:31.159 Core mask: 0x1 00:06:31.159 00:06:31.159 Accel Perf Configuration: 00:06:31.159 Workload Type: fill 00:06:31.159 Fill pattern: 0x80 00:06:31.159 Transfer size: 4096 bytes 00:06:31.159 Vector count 1 00:06:31.159 Module: software 00:06:31.159 Queue depth: 64 00:06:31.159 Allocate depth: 64 00:06:31.159 # threads/core: 1 00:06:31.159 Run time: 1 seconds 00:06:31.159 Verify: Yes 00:06:31.159 00:06:31.159 Running for 1 seconds... 
00:06:31.159 00:06:31.159 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.159 ------------------------------------------------------------------------------------ 00:06:31.159 0,0 570944/s 2230 MiB/s 0 0 00:06:31.159 ==================================================================================== 00:06:31.159 Total 570944/s 2230 MiB/s 0 0' 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.159 22:26:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.159 22:26:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.159 22:26:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.159 22:26:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.159 22:26:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.159 22:26:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.159 22:26:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.159 22:26:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.159 22:26:31 -- accel/accel.sh@42 -- # jq -r . 00:06:31.159 [2024-11-20 22:26:31.551873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:31.159 [2024-11-20 22:26:31.551952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70382 ] 00:06:31.159 [2024-11-20 22:26:31.685611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.159 [2024-11-20 22:26:31.750789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val= 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val= 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val=0x1 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val= 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val= 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val=fill 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val=0x80 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 
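For the fill run above, run_test passed -f 128 -q 64 -a 64 -y, which the SPDK configuration block reports as fill pattern 0x80 with queue and allocate depths of 64; the usage text earlier notes that -f defaults to 255 when omitted. A one-line, purely illustrative sketch of the byte-to-hex correspondence:

printf 'Fill pattern: 0x%02x\n' 128   # -f 128 is what the configuration block above reports as 0x80
printf 'Fill pattern: 0x%02x\n' 255   # the documented default when -f is not given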
00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val= 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val=software 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val=64 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val=64 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val=1 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val=Yes 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val= 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:31.159 22:26:31 -- accel/accel.sh@21 -- # val= 00:06:31.159 22:26:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # IFS=: 00:06:31.159 22:26:31 -- accel/accel.sh@20 -- # read -r var val 00:06:32.535 22:26:33 -- accel/accel.sh@21 -- # val= 00:06:32.535 22:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.535 22:26:33 -- accel/accel.sh@21 -- # val= 00:06:32.535 22:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.535 22:26:33 -- accel/accel.sh@21 -- # val= 00:06:32.535 22:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.535 22:26:33 -- accel/accel.sh@21 -- # val= 00:06:32.535 22:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.535 22:26:33 -- accel/accel.sh@21 -- # val= 00:06:32.535 22:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # IFS=: 
00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.535 22:26:33 -- accel/accel.sh@21 -- # val= 00:06:32.535 22:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.535 22:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.536 22:26:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.536 22:26:33 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:32.536 ************************************ 00:06:32.536 END TEST accel_fill 00:06:32.536 ************************************ 00:06:32.536 22:26:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.536 00:06:32.536 real 0m2.990s 00:06:32.536 user 0m2.510s 00:06:32.536 sys 0m0.276s 00:06:32.536 22:26:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.536 22:26:33 -- common/autotest_common.sh@10 -- # set +x 00:06:32.536 22:26:33 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:32.536 22:26:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:32.536 22:26:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.536 22:26:33 -- common/autotest_common.sh@10 -- # set +x 00:06:32.536 ************************************ 00:06:32.536 START TEST accel_copy_crc32c 00:06:32.536 ************************************ 00:06:32.536 22:26:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:32.536 22:26:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.536 22:26:33 -- accel/accel.sh@17 -- # local accel_module 00:06:32.536 22:26:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:32.536 22:26:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:32.536 22:26:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.536 22:26:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.536 22:26:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.536 22:26:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.536 22:26:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.536 22:26:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.536 22:26:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.536 22:26:33 -- accel/accel.sh@42 -- # jq -r . 00:06:32.536 [2024-11-20 22:26:33.119772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:32.536 [2024-11-20 22:26:33.119861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70416 ] 00:06:32.536 [2024-11-20 22:26:33.249117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.794 [2024-11-20 22:26:33.317668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.169 22:26:34 -- accel/accel.sh@18 -- # out=' 00:06:34.169 SPDK Configuration: 00:06:34.169 Core mask: 0x1 00:06:34.169 00:06:34.169 Accel Perf Configuration: 00:06:34.169 Workload Type: copy_crc32c 00:06:34.169 CRC-32C seed: 0 00:06:34.169 Vector size: 4096 bytes 00:06:34.169 Transfer size: 4096 bytes 00:06:34.169 Vector count 1 00:06:34.169 Module: software 00:06:34.169 Queue depth: 32 00:06:34.169 Allocate depth: 32 00:06:34.169 # threads/core: 1 00:06:34.169 Run time: 1 seconds 00:06:34.169 Verify: Yes 00:06:34.169 00:06:34.169 Running for 1 seconds... 
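The copy_crc32c results printed just below can be cross-checked against the configuration block the same way as the other tables in this run: transfers per second times the 4096-byte transfer size reproduces the MiB/s column. A hedged sketch of that arithmetic, using the per-core figure from the table that follows:

transfers_per_sec=309472     # per-core value from the copy_crc32c results table below
transfer_size=4096           # "Transfer size: 4096 bytes" in the configuration block below
echo $(( transfers_per_sec * transfer_size / 1024 / 1024 ))   # prints 1208 (MiB/s, integer-truncated)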
00:06:34.169 00:06:34.169 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.169 ------------------------------------------------------------------------------------ 00:06:34.169 0,0 309472/s 1208 MiB/s 0 0 00:06:34.169 ==================================================================================== 00:06:34.169 Total 309472/s 1208 MiB/s 0 0' 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:34.169 22:26:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:34.169 22:26:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.169 22:26:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.169 22:26:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.169 22:26:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.169 22:26:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.169 22:26:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.169 22:26:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.169 22:26:34 -- accel/accel.sh@42 -- # jq -r . 00:06:34.169 [2024-11-20 22:26:34.604368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:34.169 [2024-11-20 22:26:34.605312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70436 ] 00:06:34.169 [2024-11-20 22:26:34.749983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.169 [2024-11-20 22:26:34.812739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val= 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val= 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val=0x1 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val= 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val= 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val=0 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 
22:26:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val= 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val=software 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val=32 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val=32 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.169 22:26:34 -- accel/accel.sh@21 -- # val=1 00:06:34.169 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.169 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.428 22:26:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.428 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.428 22:26:34 -- accel/accel.sh@21 -- # val=Yes 00:06:34.428 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.428 22:26:34 -- accel/accel.sh@21 -- # val= 00:06:34.428 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:34.428 22:26:34 -- accel/accel.sh@21 -- # val= 00:06:34.428 22:26:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # IFS=: 00:06:34.428 22:26:34 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 22:26:36 -- accel/accel.sh@21 -- # val= 00:06:35.804 22:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 22:26:36 -- accel/accel.sh@21 -- # val= 00:06:35.804 22:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 22:26:36 -- accel/accel.sh@21 -- # val= 00:06:35.804 22:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 22:26:36 -- accel/accel.sh@21 -- # val= 00:06:35.804 22:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # IFS=: 
00:06:35.804 ************************************ 00:06:35.804 END TEST accel_copy_crc32c 00:06:35.804 ************************************ 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 22:26:36 -- accel/accel.sh@21 -- # val= 00:06:35.804 22:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 22:26:36 -- accel/accel.sh@21 -- # val= 00:06:35.804 22:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 22:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 22:26:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.804 22:26:36 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:35.804 22:26:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.804 00:06:35.804 real 0m3.007s 00:06:35.804 user 0m2.514s 00:06:35.804 sys 0m0.290s 00:06:35.804 22:26:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.804 22:26:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.804 22:26:36 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:35.804 22:26:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:35.804 22:26:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.804 22:26:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.804 ************************************ 00:06:35.804 START TEST accel_copy_crc32c_C2 00:06:35.804 ************************************ 00:06:35.804 22:26:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:35.804 22:26:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.804 22:26:36 -- accel/accel.sh@17 -- # local accel_module 00:06:35.804 22:26:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:35.804 22:26:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:35.804 22:26:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.804 22:26:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.804 22:26:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.804 22:26:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.804 22:26:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.805 22:26:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.805 22:26:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.805 22:26:36 -- accel/accel.sh@42 -- # jq -r . 00:06:35.805 [2024-11-20 22:26:36.185867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
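accel_copy_crc32c_C2, started above, re-runs the same workload with -C 2; per the usage text (-C configures the io vector size for supported workloads), the configuration block below reports vector count 2 and a transfer size of 8192 bytes, i.e. two 4096-byte vectors per operation. A small sketch of that relationship, with the values taken from the run_test line above and the configuration block below:

vector_size=4096      # "Vector size: 4096 bytes" reported below
vector_count=2        # from the -C 2 flag on the run_test line above
echo $(( vector_size * vector_count ))   # 8192, the "Transfer size" reported below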
00:06:35.805 [2024-11-20 22:26:36.186102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70470 ] 00:06:35.805 [2024-11-20 22:26:36.321711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.805 [2024-11-20 22:26:36.386156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.180 22:26:37 -- accel/accel.sh@18 -- # out=' 00:06:37.180 SPDK Configuration: 00:06:37.180 Core mask: 0x1 00:06:37.180 00:06:37.180 Accel Perf Configuration: 00:06:37.180 Workload Type: copy_crc32c 00:06:37.180 CRC-32C seed: 0 00:06:37.180 Vector size: 4096 bytes 00:06:37.180 Transfer size: 8192 bytes 00:06:37.180 Vector count 2 00:06:37.180 Module: software 00:06:37.180 Queue depth: 32 00:06:37.180 Allocate depth: 32 00:06:37.180 # threads/core: 1 00:06:37.180 Run time: 1 seconds 00:06:37.180 Verify: Yes 00:06:37.180 00:06:37.181 Running for 1 seconds... 00:06:37.181 00:06:37.181 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.181 ------------------------------------------------------------------------------------ 00:06:37.181 0,0 225280/s 1760 MiB/s 0 0 00:06:37.181 ==================================================================================== 00:06:37.181 Total 225280/s 880 MiB/s 0 0' 00:06:37.181 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.181 22:26:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:37.181 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.181 22:26:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:37.181 22:26:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.181 22:26:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.181 22:26:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.181 22:26:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.181 22:26:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.181 22:26:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.181 22:26:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.181 22:26:37 -- accel/accel.sh@42 -- # jq -r . 00:06:37.181 [2024-11-20 22:26:37.664949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:37.181 [2024-11-20 22:26:37.665180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70490 ] 00:06:37.181 [2024-11-20 22:26:37.798658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.181 [2024-11-20 22:26:37.858728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val= 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val= 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val=0x1 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val= 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val= 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val=0 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val= 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val=software 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val=32 00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 22:26:37 -- accel/accel.sh@21 -- # val=32 
00:06:37.439 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.440 22:26:37 -- accel/accel.sh@21 -- # val=1 00:06:37.440 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.440 22:26:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.440 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.440 22:26:37 -- accel/accel.sh@21 -- # val=Yes 00:06:37.440 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.440 22:26:37 -- accel/accel.sh@21 -- # val= 00:06:37.440 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:37.440 22:26:37 -- accel/accel.sh@21 -- # val= 00:06:37.440 22:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # IFS=: 00:06:37.440 22:26:37 -- accel/accel.sh@20 -- # read -r var val 00:06:38.830 22:26:39 -- accel/accel.sh@21 -- # val= 00:06:38.830 22:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.830 22:26:39 -- accel/accel.sh@21 -- # val= 00:06:38.830 22:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.830 22:26:39 -- accel/accel.sh@21 -- # val= 00:06:38.830 22:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.830 22:26:39 -- accel/accel.sh@21 -- # val= 00:06:38.830 ************************************ 00:06:38.830 END TEST accel_copy_crc32c_C2 00:06:38.830 ************************************ 00:06:38.830 22:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.830 22:26:39 -- accel/accel.sh@21 -- # val= 00:06:38.830 22:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.830 22:26:39 -- accel/accel.sh@21 -- # val= 00:06:38.830 22:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # IFS=: 00:06:38.830 22:26:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.830 22:26:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.830 22:26:39 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:38.830 22:26:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.830 00:06:38.830 real 0m2.955s 00:06:38.830 user 0m2.485s 00:06:38.830 sys 0m0.266s 00:06:38.830 22:26:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.830 22:26:39 -- common/autotest_common.sh@10 -- # set +x 00:06:38.830 22:26:39 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:38.830 22:26:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
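The accel_copy_crc32c_C2 case that finishes above drives the software accel module with a copy_crc32c workload: each 8192-byte transfer is assembled from two 4096-byte source vectors (the -C 2 flag) and the data is copied while a CRC-32C (Castagnoli) checksum seeded with 0 is computed over it. A minimal Python sketch of that semantics follows; it is illustrative only, not SPDK's implementation, and the assumption that the checksum is chained across both vectors is mine.

def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value

def copy_crc32c(sources, seed=0):
    """Copy each source vector into one destination and checksum the copied data."""
    dst = bytearray()
    crc = seed
    for src in sources:
        dst += src               # the "copy" half of the operation
        crc = crc32c(src, crc)   # the "crc32c" half, chained across vectors
    return bytes(dst), crc

# Two 4096-byte source vectors -> one 8192-byte transfer, as configured above.
srcs = [bytes(range(256)) * 16, bytes(reversed(range(256))) * 16]
data, crc = copy_crc32c(srcs)
assert len(data) == 8192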
00:06:38.830 22:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.830 22:26:39 -- common/autotest_common.sh@10 -- # set +x 00:06:38.830 ************************************ 00:06:38.830 START TEST accel_dualcast 00:06:38.830 ************************************ 00:06:38.830 22:26:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:38.830 22:26:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.830 22:26:39 -- accel/accel.sh@17 -- # local accel_module 00:06:38.830 22:26:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:38.830 22:26:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.830 22:26:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.830 22:26:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.830 22:26:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.830 22:26:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.830 22:26:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.830 22:26:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.830 22:26:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.830 22:26:39 -- accel/accel.sh@42 -- # jq -r . 00:06:38.830 [2024-11-20 22:26:39.191890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:38.830 [2024-11-20 22:26:39.192135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70524 ] 00:06:38.830 [2024-11-20 22:26:39.326552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.830 [2024-11-20 22:26:39.390411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.223 22:26:40 -- accel/accel.sh@18 -- # out=' 00:06:40.223 SPDK Configuration: 00:06:40.223 Core mask: 0x1 00:06:40.223 00:06:40.223 Accel Perf Configuration: 00:06:40.223 Workload Type: dualcast 00:06:40.223 Transfer size: 4096 bytes 00:06:40.223 Vector count 1 00:06:40.223 Module: software 00:06:40.223 Queue depth: 32 00:06:40.223 Allocate depth: 32 00:06:40.223 # threads/core: 1 00:06:40.223 Run time: 1 seconds 00:06:40.223 Verify: Yes 00:06:40.223 00:06:40.223 Running for 1 seconds... 00:06:40.223 00:06:40.223 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.223 ------------------------------------------------------------------------------------ 00:06:40.223 0,0 427648/s 1670 MiB/s 0 0 00:06:40.223 ==================================================================================== 00:06:40.223 Total 427648/s 1670 MiB/s 0 0' 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:40.223 22:26:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.223 22:26:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.223 22:26:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.223 22:26:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.223 22:26:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.223 22:26:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.223 22:26:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.223 22:26:40 -- accel/accel.sh@42 -- # jq -r . 
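The dualcast run reported above copies a single 4096-byte source buffer into two destination buffers per operation, as the opcode name suggests, and the -y flag makes the tool verify every completed operation. A small sketch of those semantics, illustrative rather than the engine's code:

def dualcast(src: bytes) -> tuple[bytes, bytes]:
    """Fan one source buffer out into two destination buffers."""
    return bytes(src), bytes(src)

def verify(src: bytes, dst1: bytes, dst2: bytes) -> bool:
    """What the -y (verify) option conceptually checks after each operation."""
    return dst1 == src and dst2 == src

src = bytes(4096)            # 4096-byte transfer size, as configured above
d1, d2 = dualcast(src)
assert verify(src, d1, d2)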
00:06:40.223 [2024-11-20 22:26:40.668634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.223 [2024-11-20 22:26:40.668861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70546 ] 00:06:40.223 [2024-11-20 22:26:40.802808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.223 [2024-11-20 22:26:40.864638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val= 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val= 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val=0x1 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val= 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val= 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val=dualcast 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val= 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val=software 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val=32 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val=32 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val=1 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 
22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.223 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.223 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.223 22:26:40 -- accel/accel.sh@21 -- # val=Yes 00:06:40.497 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 22:26:40 -- accel/accel.sh@21 -- # val= 00:06:40.497 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 22:26:40 -- accel/accel.sh@21 -- # val= 00:06:40.497 22:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 22:26:40 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 22:26:40 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 22:26:42 -- accel/accel.sh@21 -- # val= 00:06:41.431 22:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 22:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 22:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 22:26:42 -- accel/accel.sh@21 -- # val= 00:06:41.431 22:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 22:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:41.432 22:26:42 -- accel/accel.sh@21 -- # val= 00:06:41.432 22:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:41.432 22:26:42 -- accel/accel.sh@21 -- # val= 00:06:41.432 22:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:41.432 22:26:42 -- accel/accel.sh@21 -- # val= 00:06:41.432 22:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:41.432 22:26:42 -- accel/accel.sh@21 -- # val= 00:06:41.432 22:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # IFS=: 00:06:41.432 22:26:42 -- accel/accel.sh@20 -- # read -r var val 00:06:41.432 22:26:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.432 22:26:42 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:41.432 22:26:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.432 00:06:41.432 real 0m2.986s 00:06:41.432 user 0m2.511s 00:06:41.432 sys 0m0.272s 00:06:41.432 22:26:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.432 ************************************ 00:06:41.432 END TEST accel_dualcast 00:06:41.432 ************************************ 00:06:41.432 22:26:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.727 22:26:42 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:41.727 22:26:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:41.727 22:26:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.727 22:26:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.727 ************************************ 00:06:41.727 START TEST accel_compare 00:06:41.727 ************************************ 00:06:41.727 22:26:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:41.727 
22:26:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.727 22:26:42 -- accel/accel.sh@17 -- # local accel_module 00:06:41.727 22:26:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:41.727 22:26:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:41.727 22:26:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.727 22:26:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.727 22:26:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.727 22:26:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.727 22:26:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.727 22:26:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.727 22:26:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.727 22:26:42 -- accel/accel.sh@42 -- # jq -r . 00:06:41.727 [2024-11-20 22:26:42.226242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:41.727 [2024-11-20 22:26:42.226340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70581 ] 00:06:41.727 [2024-11-20 22:26:42.360971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.727 [2024-11-20 22:26:42.424144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.100 22:26:43 -- accel/accel.sh@18 -- # out=' 00:06:43.100 SPDK Configuration: 00:06:43.100 Core mask: 0x1 00:06:43.100 00:06:43.100 Accel Perf Configuration: 00:06:43.100 Workload Type: compare 00:06:43.100 Transfer size: 4096 bytes 00:06:43.100 Vector count 1 00:06:43.100 Module: software 00:06:43.100 Queue depth: 32 00:06:43.100 Allocate depth: 32 00:06:43.100 # threads/core: 1 00:06:43.100 Run time: 1 seconds 00:06:43.100 Verify: Yes 00:06:43.100 00:06:43.100 Running for 1 seconds... 00:06:43.100 00:06:43.100 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.100 ------------------------------------------------------------------------------------ 00:06:43.100 0,0 570976/s 2230 MiB/s 0 0 00:06:43.100 ==================================================================================== 00:06:43.100 Total 570976/s 2230 MiB/s 0 0' 00:06:43.100 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.100 22:26:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:43.100 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.100 22:26:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:43.100 22:26:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.100 22:26:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.100 22:26:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.100 22:26:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.100 22:26:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.100 22:26:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.100 22:26:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.100 22:26:43 -- accel/accel.sh@42 -- # jq -r . 00:06:43.100 [2024-11-20 22:26:43.703107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:43.100 [2024-11-20 22:26:43.703191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70603 ] 00:06:43.360 [2024-11-20 22:26:43.837576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.360 [2024-11-20 22:26:43.896223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val= 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val= 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val=0x1 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val= 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val= 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val=compare 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val= 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val=software 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val=32 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val=32 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val=1 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val='1 seconds' 
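The compare workload being configured here checks two equal-sized 4096-byte buffers for equality; the Miscompares column in the result tables counts operations whose buffers differed (zero throughout this run). A sketch of the memcmp-style semantics, again illustrative only:

def compare(buf_a: bytes, buf_b: bytes) -> int:
    """memcmp-style compare: 0 when the buffers match, non-zero otherwise."""
    if buf_a == buf_b:
        return 0
    for a, b in zip(buf_a, buf_b):
        if a != b:
            return a - b          # sign of the first differing byte
    return len(buf_a) - len(buf_b)

miscompares = 0
if compare(bytes(4096), bytes(4096)) != 0:
    miscompares += 1              # this is what feeds the Miscompares column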
00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val=Yes 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val= 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:43.360 22:26:43 -- accel/accel.sh@21 -- # val= 00:06:43.360 22:26:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # IFS=: 00:06:43.360 22:26:43 -- accel/accel.sh@20 -- # read -r var val 00:06:44.737 22:26:45 -- accel/accel.sh@21 -- # val= 00:06:44.737 22:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.737 22:26:45 -- accel/accel.sh@21 -- # val= 00:06:44.737 22:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.737 22:26:45 -- accel/accel.sh@21 -- # val= 00:06:44.737 22:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.737 22:26:45 -- accel/accel.sh@21 -- # val= 00:06:44.737 22:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.737 22:26:45 -- accel/accel.sh@21 -- # val= 00:06:44.737 22:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.737 22:26:45 -- accel/accel.sh@21 -- # val= 00:06:44.737 22:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.737 22:26:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.737 22:26:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.737 22:26:45 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:44.737 22:26:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.737 00:06:44.737 real 0m2.982s 00:06:44.737 user 0m2.513s 00:06:44.737 sys 0m0.266s 00:06:44.737 22:26:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.737 ************************************ 00:06:44.737 END TEST accel_compare 00:06:44.737 ************************************ 00:06:44.737 22:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.737 22:26:45 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:44.737 22:26:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:44.737 22:26:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.737 22:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.737 ************************************ 00:06:44.737 START TEST accel_xor 00:06:44.737 ************************************ 00:06:44.737 22:26:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:44.737 22:26:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.737 22:26:45 -- accel/accel.sh@17 -- # local accel_module 00:06:44.737 
22:26:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:44.737 22:26:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:44.737 22:26:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.737 22:26:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.737 22:26:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.737 22:26:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.737 22:26:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.737 22:26:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.737 22:26:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.737 22:26:45 -- accel/accel.sh@42 -- # jq -r . 00:06:44.737 [2024-11-20 22:26:45.262132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:44.737 [2024-11-20 22:26:45.262224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70632 ] 00:06:44.737 [2024-11-20 22:26:45.390820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.737 [2024-11-20 22:26:45.457902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.115 22:26:46 -- accel/accel.sh@18 -- # out=' 00:06:46.115 SPDK Configuration: 00:06:46.115 Core mask: 0x1 00:06:46.115 00:06:46.115 Accel Perf Configuration: 00:06:46.115 Workload Type: xor 00:06:46.115 Source buffers: 2 00:06:46.115 Transfer size: 4096 bytes 00:06:46.115 Vector count 1 00:06:46.115 Module: software 00:06:46.115 Queue depth: 32 00:06:46.115 Allocate depth: 32 00:06:46.115 # threads/core: 1 00:06:46.115 Run time: 1 seconds 00:06:46.115 Verify: Yes 00:06:46.115 00:06:46.115 Running for 1 seconds... 00:06:46.115 00:06:46.115 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.115 ------------------------------------------------------------------------------------ 00:06:46.115 0,0 263584/s 1029 MiB/s 0 0 00:06:46.115 ==================================================================================== 00:06:46.115 Total 263584/s 1029 MiB/s 0 0' 00:06:46.115 22:26:46 -- accel/accel.sh@20 -- # IFS=: 00:06:46.115 22:26:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:46.115 22:26:46 -- accel/accel.sh@20 -- # read -r var val 00:06:46.116 22:26:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:46.116 22:26:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.116 22:26:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.116 22:26:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.116 22:26:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.116 22:26:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.116 22:26:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.116 22:26:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.116 22:26:46 -- accel/accel.sh@42 -- # jq -r . 00:06:46.116 [2024-11-20 22:26:46.741019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
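The xor workload above folds two 4096-byte source buffers into one destination with a bytewise XOR ("Source buffers: 2"); the second accel_xor case further down repeats it with three sources via -x 3. A sketch that accepts any number of sources, so it covers both runs; illustrative only:

from functools import reduce

def xor_buffers(sources: list[bytes]) -> bytes:
    """Bytewise XOR of N equal-sized source buffers into one destination."""
    assert len(sources) >= 2 and len({len(s) for s in sources}) == 1
    return bytes(reduce(lambda acc, b: acc ^ b, column) for column in zip(*sources))

two = [bytes([0xAA]) * 4096, bytes([0x55]) * 4096]      # the -w xor run
three = two + [bytes([0xFF]) * 4096]                    # the -w xor -y -x 3 run
assert xor_buffers(two) == bytes([0xFF]) * 4096
assert xor_buffers(three) == bytes([0x00]) * 4096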
00:06:46.116 [2024-11-20 22:26:46.741106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70657 ] 00:06:46.375 [2024-11-20 22:26:46.875131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.375 [2024-11-20 22:26:46.936806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val= 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val= 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val=0x1 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val= 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val= 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val=xor 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val=2 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val= 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val=software 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val=32 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val=32 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val=1 00:06:46.375 22:26:47 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val=Yes 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val= 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.375 22:26:47 -- accel/accel.sh@21 -- # val= 00:06:46.375 22:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.375 22:26:47 -- accel/accel.sh@20 -- # read -r var val 00:06:47.752 22:26:48 -- accel/accel.sh@21 -- # val= 00:06:47.752 22:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # IFS=: 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # read -r var val 00:06:47.752 22:26:48 -- accel/accel.sh@21 -- # val= 00:06:47.752 22:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # IFS=: 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # read -r var val 00:06:47.752 22:26:48 -- accel/accel.sh@21 -- # val= 00:06:47.752 22:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # IFS=: 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # read -r var val 00:06:47.752 22:26:48 -- accel/accel.sh@21 -- # val= 00:06:47.752 22:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # IFS=: 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # read -r var val 00:06:47.752 22:26:48 -- accel/accel.sh@21 -- # val= 00:06:47.752 22:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # IFS=: 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # read -r var val 00:06:47.752 22:26:48 -- accel/accel.sh@21 -- # val= 00:06:47.752 22:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # IFS=: 00:06:47.752 22:26:48 -- accel/accel.sh@20 -- # read -r var val 00:06:47.752 22:26:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.752 22:26:48 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:47.752 22:26:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.752 00:06:47.752 real 0m2.992s 00:06:47.752 user 0m2.524s 00:06:47.752 sys 0m0.264s 00:06:47.752 22:26:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.752 22:26:48 -- common/autotest_common.sh@10 -- # set +x 00:06:47.752 ************************************ 00:06:47.752 END TEST accel_xor 00:06:47.752 ************************************ 00:06:47.752 22:26:48 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:47.752 22:26:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:47.752 22:26:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.752 22:26:48 -- common/autotest_common.sh@10 -- # set +x 00:06:47.752 ************************************ 00:06:47.752 START TEST accel_xor 00:06:47.752 ************************************ 00:06:47.752 
22:26:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:47.752 22:26:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.752 22:26:48 -- accel/accel.sh@17 -- # local accel_module 00:06:47.752 22:26:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:47.752 22:26:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:47.752 22:26:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.752 22:26:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.752 22:26:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.752 22:26:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.752 22:26:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.752 22:26:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.752 22:26:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.752 22:26:48 -- accel/accel.sh@42 -- # jq -r . 00:06:47.752 [2024-11-20 22:26:48.304071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.752 [2024-11-20 22:26:48.304158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70686 ] 00:06:47.752 [2024-11-20 22:26:48.431236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.011 [2024-11-20 22:26:48.491588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.387 22:26:49 -- accel/accel.sh@18 -- # out=' 00:06:49.387 SPDK Configuration: 00:06:49.387 Core mask: 0x1 00:06:49.387 00:06:49.387 Accel Perf Configuration: 00:06:49.387 Workload Type: xor 00:06:49.387 Source buffers: 3 00:06:49.387 Transfer size: 4096 bytes 00:06:49.387 Vector count 1 00:06:49.387 Module: software 00:06:49.387 Queue depth: 32 00:06:49.387 Allocate depth: 32 00:06:49.387 # threads/core: 1 00:06:49.387 Run time: 1 seconds 00:06:49.387 Verify: Yes 00:06:49.387 00:06:49.387 Running for 1 seconds... 00:06:49.387 00:06:49.387 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.387 ------------------------------------------------------------------------------------ 00:06:49.387 0,0 252768/s 987 MiB/s 0 0 00:06:49.387 ==================================================================================== 00:06:49.387 Total 252768/s 987 MiB/s 0 0' 00:06:49.387 22:26:49 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:49.387 22:26:49 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:49.387 22:26:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.387 22:26:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.387 22:26:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.387 22:26:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.387 22:26:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.387 22:26:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.387 22:26:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.387 22:26:49 -- accel/accel.sh@42 -- # jq -r . 00:06:49.387 [2024-11-20 22:26:49.807696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:49.387 [2024-11-20 22:26:49.807806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70711 ] 00:06:49.387 [2024-11-20 22:26:49.950918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.387 [2024-11-20 22:26:50.019535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val= 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val= 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val=0x1 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val= 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val= 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val=xor 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val=3 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.387 22:26:50 -- accel/accel.sh@21 -- # val= 00:06:49.387 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.387 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.388 22:26:50 -- accel/accel.sh@21 -- # val=software 00:06:49.388 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.388 22:26:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.388 22:26:50 -- accel/accel.sh@21 -- # val=32 00:06:49.388 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.388 22:26:50 -- accel/accel.sh@21 -- # val=32 00:06:49.388 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.388 22:26:50 -- accel/accel.sh@21 -- # val=1 00:06:49.388 22:26:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.388 22:26:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.388 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.388 22:26:50 -- accel/accel.sh@21 -- # val=Yes 00:06:49.388 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.388 22:26:50 -- accel/accel.sh@21 -- # val= 00:06:49.388 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.388 22:26:50 -- accel/accel.sh@21 -- # val= 00:06:49.388 22:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.388 22:26:50 -- accel/accel.sh@20 -- # read -r var val 00:06:50.765 22:26:51 -- accel/accel.sh@21 -- # val= 00:06:50.765 22:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # IFS=: 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # read -r var val 00:06:50.765 22:26:51 -- accel/accel.sh@21 -- # val= 00:06:50.765 22:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # IFS=: 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # read -r var val 00:06:50.765 22:26:51 -- accel/accel.sh@21 -- # val= 00:06:50.765 22:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # IFS=: 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # read -r var val 00:06:50.765 22:26:51 -- accel/accel.sh@21 -- # val= 00:06:50.765 22:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # IFS=: 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # read -r var val 00:06:50.765 22:26:51 -- accel/accel.sh@21 -- # val= 00:06:50.765 22:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # IFS=: 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # read -r var val 00:06:50.765 22:26:51 -- accel/accel.sh@21 -- # val= 00:06:50.765 22:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # IFS=: 00:06:50.765 22:26:51 -- accel/accel.sh@20 -- # read -r var val 00:06:50.765 22:26:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.765 22:26:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:50.765 22:26:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.765 00:06:50.765 real 0m2.999s 00:06:50.765 user 0m2.513s 00:06:50.765 sys 0m0.283s 00:06:50.765 22:26:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.765 22:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:50.765 ************************************ 00:06:50.765 END TEST accel_xor 00:06:50.765 ************************************ 00:06:50.765 22:26:51 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:50.765 22:26:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:50.765 22:26:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.765 22:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:50.765 ************************************ 00:06:50.765 START TEST accel_dif_verify 00:06:50.765 ************************************ 
00:06:50.765 22:26:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:50.765 22:26:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.765 22:26:51 -- accel/accel.sh@17 -- # local accel_module 00:06:50.765 22:26:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:50.765 22:26:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.765 22:26:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.765 22:26:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.765 22:26:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.765 22:26:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.765 22:26:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.765 22:26:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.765 22:26:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.765 22:26:51 -- accel/accel.sh@42 -- # jq -r . 00:06:50.765 [2024-11-20 22:26:51.350830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.765 [2024-11-20 22:26:51.350926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70740 ] 00:06:50.765 [2024-11-20 22:26:51.486177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.023 [2024-11-20 22:26:51.549100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.400 22:26:52 -- accel/accel.sh@18 -- # out=' 00:06:52.400 SPDK Configuration: 00:06:52.400 Core mask: 0x1 00:06:52.400 00:06:52.400 Accel Perf Configuration: 00:06:52.400 Workload Type: dif_verify 00:06:52.400 Vector size: 4096 bytes 00:06:52.400 Transfer size: 4096 bytes 00:06:52.400 Block size: 512 bytes 00:06:52.400 Metadata size: 8 bytes 00:06:52.400 Vector count 1 00:06:52.400 Module: software 00:06:52.400 Queue depth: 32 00:06:52.400 Allocate depth: 32 00:06:52.400 # threads/core: 1 00:06:52.400 Run time: 1 seconds 00:06:52.400 Verify: No 00:06:52.400 00:06:52.400 Running for 1 seconds... 00:06:52.400 00:06:52.400 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.400 ------------------------------------------------------------------------------------ 00:06:52.400 0,0 127904/s 507 MiB/s 0 0 00:06:52.400 ==================================================================================== 00:06:52.400 Total 127904/s 499 MiB/s 0 0' 00:06:52.400 22:26:52 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:52 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:52.400 22:26:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.400 22:26:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:52.400 22:26:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.400 22:26:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.400 22:26:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.400 22:26:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.400 22:26:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.400 22:26:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.400 22:26:52 -- accel/accel.sh@42 -- # jq -r . 00:06:52.400 [2024-11-20 22:26:52.826761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:52.400 [2024-11-20 22:26:52.826848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70765 ] 00:06:52.400 [2024-11-20 22:26:52.961973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.400 [2024-11-20 22:26:53.022950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val= 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val= 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val=0x1 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val= 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val= 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val=dif_verify 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val= 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val=software 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 
-- # val=32 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val=32 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val=1 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val=No 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val= 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.400 22:26:53 -- accel/accel.sh@21 -- # val= 00:06:52.400 22:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.400 22:26:53 -- accel/accel.sh@20 -- # read -r var val 00:06:53.776 22:26:54 -- accel/accel.sh@21 -- # val= 00:06:53.776 22:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # IFS=: 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # read -r var val 00:06:53.776 22:26:54 -- accel/accel.sh@21 -- # val= 00:06:53.776 22:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # IFS=: 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # read -r var val 00:06:53.776 22:26:54 -- accel/accel.sh@21 -- # val= 00:06:53.776 22:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # IFS=: 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # read -r var val 00:06:53.776 22:26:54 -- accel/accel.sh@21 -- # val= 00:06:53.776 22:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # IFS=: 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # read -r var val 00:06:53.776 22:26:54 -- accel/accel.sh@21 -- # val= 00:06:53.776 22:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # IFS=: 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # read -r var val 00:06:53.776 22:26:54 -- accel/accel.sh@21 -- # val= 00:06:53.776 ************************************ 00:06:53.776 END TEST accel_dif_verify 00:06:53.776 ************************************ 00:06:53.776 22:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # IFS=: 00:06:53.776 22:26:54 -- accel/accel.sh@20 -- # read -r var val 00:06:53.776 22:26:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.776 22:26:54 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:53.776 22:26:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.776 00:06:53.776 real 0m2.994s 00:06:53.776 user 0m2.525s 00:06:53.776 sys 0m0.260s 00:06:53.776 22:26:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.776 
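The dif_verify case that just finished, and the dif_generate case that starts below, work on 4096-byte buffers split into 512-byte blocks with 8 bytes of protection metadata each ("Block size: 512", "Metadata size: 8"). In T10 DIF terms that metadata is a 2-byte guard tag (a CRC-16 over the block data), a 2-byte application tag and a 4-byte reference tag: dif_generate produces the tags, dif_verify recomputes and checks them. The sketch below models that layout; the CRC-16/T10-DIF polynomial (0x8BB7) and the tag order come from the DIF format itself, not from SPDK's code, so treat it as an assumption-laden illustration.

import struct

def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-16/T10-DIF: polynomial 0x8BB7, no reflection, no final XOR."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def dif_generate(data: bytes, ref_tag: int, app_tag: int = 0, block: int = 512) -> bytes:
    """Append 8 bytes of DIF (guard, app tag, ref tag) after every data block."""
    out = bytearray()
    for n, i in enumerate(range(0, len(data), block)):
        chunk = data[i:i + block]
        out += chunk + struct.pack(">HHI", crc16_t10dif(chunk), app_tag, ref_tag + n)
    return bytes(out)

def dif_verify(protected: bytes, ref_tag: int, app_tag: int = 0, block: int = 512) -> bool:
    """Recompute the tags for every block and compare them with the stored DIF."""
    stride = block + 8
    for n, i in enumerate(range(0, len(protected), stride)):
        chunk, dif = protected[i:i + block], protected[i + block:i + stride]
        if dif != struct.pack(">HHI", crc16_t10dif(chunk), app_tag, ref_tag + n):
            return False
    return True

buf = bytes(4096)                                   # transfer size configured above
assert dif_verify(dif_generate(buf, ref_tag=1), ref_tag=1)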
22:26:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.776 22:26:54 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:53.776 22:26:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.776 22:26:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.776 22:26:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.776 ************************************ 00:06:53.776 START TEST accel_dif_generate 00:06:53.776 ************************************ 00:06:53.776 22:26:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:53.776 22:26:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.776 22:26:54 -- accel/accel.sh@17 -- # local accel_module 00:06:53.776 22:26:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:53.776 22:26:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:53.776 22:26:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.776 22:26:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.776 22:26:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.776 22:26:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.776 22:26:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.776 22:26:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.776 22:26:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.776 22:26:54 -- accel/accel.sh@42 -- # jq -r . 00:06:53.776 [2024-11-20 22:26:54.395960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:53.776 [2024-11-20 22:26:54.396189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70794 ] 00:06:54.035 [2024-11-20 22:26:54.530218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.035 [2024-11-20 22:26:54.596531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.410 22:26:55 -- accel/accel.sh@18 -- # out=' 00:06:55.410 SPDK Configuration: 00:06:55.410 Core mask: 0x1 00:06:55.410 00:06:55.410 Accel Perf Configuration: 00:06:55.410 Workload Type: dif_generate 00:06:55.410 Vector size: 4096 bytes 00:06:55.410 Transfer size: 4096 bytes 00:06:55.410 Block size: 512 bytes 00:06:55.410 Metadata size: 8 bytes 00:06:55.410 Vector count 1 00:06:55.410 Module: software 00:06:55.410 Queue depth: 32 00:06:55.410 Allocate depth: 32 00:06:55.410 # threads/core: 1 00:06:55.410 Run time: 1 seconds 00:06:55.410 Verify: No 00:06:55.410 00:06:55.410 Running for 1 seconds... 
00:06:55.410 00:06:55.410 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.410 ------------------------------------------------------------------------------------ 00:06:55.410 0,0 154016/s 601 MiB/s 0 0 00:06:55.410 ==================================================================================== 00:06:55.410 Total 154016/s 601 MiB/s 0 0' 00:06:55.410 22:26:55 -- accel/accel.sh@20 -- # IFS=: 00:06:55.410 22:26:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:55.410 22:26:55 -- accel/accel.sh@20 -- # read -r var val 00:06:55.410 22:26:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.410 22:26:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:55.410 22:26:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.410 22:26:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.410 22:26:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.410 22:26:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.410 22:26:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.410 22:26:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.410 22:26:55 -- accel/accel.sh@42 -- # jq -r . 00:06:55.410 [2024-11-20 22:26:55.908938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.410 [2024-11-20 22:26:55.909172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70819 ] 00:06:55.410 [2024-11-20 22:26:56.039906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.410 [2024-11-20 22:26:56.097878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val= 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val= 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val=0x1 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val= 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val= 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val=dif_generate 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val
00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val= 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val=software 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val=32 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val=32 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val=1 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val=No 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val= 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.669 22:26:56 -- accel/accel.sh@21 -- # val= 00:06:55.669 22:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.669 22:26:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.670 22:26:56 -- accel/accel.sh@20 -- # read -r var val 00:06:57.046 22:26:57 -- accel/accel.sh@21 -- # val= 00:06:57.046 22:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.046 22:26:57 -- accel/accel.sh@21 -- # val= 00:06:57.046 22:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.046 22:26:57 -- accel/accel.sh@21 -- # val= 00:06:57.046 22:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.046 22:26:57 -- 
accel/accel.sh@20 -- # IFS=: 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.046 22:26:57 -- accel/accel.sh@21 -- # val= 00:06:57.046 22:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.046 22:26:57 -- accel/accel.sh@21 -- # val= 00:06:57.046 22:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.046 22:26:57 -- accel/accel.sh@21 -- # val= 00:06:57.046 22:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.046 22:26:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.046 22:26:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.046 ************************************ 00:06:57.046 END TEST accel_dif_generate 00:06:57.046 ************************************ 00:06:57.046 22:26:57 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:57.046 22:26:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.046 00:06:57.046 real 0m3.017s 00:06:57.046 user 0m2.545s 00:06:57.046 sys 0m0.265s 00:06:57.046 22:26:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.046 22:26:57 -- common/autotest_common.sh@10 -- # set +x 00:06:57.046 22:26:57 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:57.046 22:26:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:57.046 22:26:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.046 22:26:57 -- common/autotest_common.sh@10 -- # set +x 00:06:57.046 ************************************ 00:06:57.046 START TEST accel_dif_generate_copy 00:06:57.046 ************************************ 00:06:57.046 22:26:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:57.046 22:26:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.046 22:26:57 -- accel/accel.sh@17 -- # local accel_module 00:06:57.046 22:26:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.046 22:26:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.046 22:26:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.046 22:26:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.046 22:26:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.046 22:26:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.046 22:26:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.046 22:26:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.046 22:26:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.046 22:26:57 -- accel/accel.sh@42 -- # jq -r . 00:06:57.046 [2024-11-20 22:26:57.475334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
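Each accel_test case in this log drives the same accel_perf example binary twice: a first pass with plain command-line flags, whose results are printed as a table, and a second pass that additionally receives an accel JSON configuration over file descriptor 62 via -c /dev/fd/62, with the surrounding val/case/read trace being the test script walking through that configuration one variable at a time. A minimal standalone equivalent of the dif_generate_copy pass now starting, assuming only the flags visible in this log and leaving everything else at its defaults, might be:

  # hypothetical manual run of the software dif_generate_copy case (binary path taken from this log)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy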
00:06:57.047 [2024-11-20 22:26:57.475448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70848 ] 00:06:57.047 [2024-11-20 22:26:57.618362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.047 [2024-11-20 22:26:57.682481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.423 22:26:58 -- accel/accel.sh@18 -- # out=' 00:06:58.423 SPDK Configuration: 00:06:58.423 Core mask: 0x1 00:06:58.423 00:06:58.423 Accel Perf Configuration: 00:06:58.423 Workload Type: dif_generate_copy 00:06:58.423 Vector size: 4096 bytes 00:06:58.423 Transfer size: 4096 bytes 00:06:58.423 Vector count 1 00:06:58.423 Module: software 00:06:58.423 Queue depth: 32 00:06:58.423 Allocate depth: 32 00:06:58.423 # threads/core: 1 00:06:58.423 Run time: 1 seconds 00:06:58.423 Verify: No 00:06:58.423 00:06:58.423 Running for 1 seconds... 00:06:58.423 00:06:58.423 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.423 ------------------------------------------------------------------------------------ 00:06:58.423 0,0 119424/s 466 MiB/s 0 0 00:06:58.423 ==================================================================================== 00:06:58.423 Total 119424/s 466 MiB/s 0 0' 00:06:58.423 22:26:58 -- accel/accel.sh@20 -- # IFS=: 00:06:58.423 22:26:58 -- accel/accel.sh@20 -- # read -r var val 00:06:58.423 22:26:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:58.423 22:26:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:58.423 22:26:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.423 22:26:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.423 22:26:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.423 22:26:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.423 22:26:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.423 22:26:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.423 22:26:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.423 22:26:58 -- accel/accel.sh@42 -- # jq -r . 00:06:58.423 [2024-11-20 22:26:58.959092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
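The Bandwidth column in the dif_generate_copy table above follows directly from the Transfers column and the 4096-byte transfer size, so the reported figure can be sanity-checked with integer arithmetic:

  # 119424 transfers/s at 4096 bytes per transfer, expressed in MiB/s
  echo $(( 119424 * 4096 / 1048576 ))   # prints 466, matching the table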
00:06:58.423 [2024-11-20 22:26:58.959174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70873 ] 00:06:58.423 [2024-11-20 22:26:59.094034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.683 [2024-11-20 22:26:59.155115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.683 22:26:59 -- accel/accel.sh@21 -- # val= 00:06:58.683 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.683 22:26:59 -- accel/accel.sh@21 -- # val= 00:06:58.683 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.683 22:26:59 -- accel/accel.sh@21 -- # val=0x1 00:06:58.683 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.683 22:26:59 -- accel/accel.sh@21 -- # val= 00:06:58.683 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.683 22:26:59 -- accel/accel.sh@21 -- # val= 00:06:58.683 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.683 22:26:59 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:58.683 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.683 22:26:59 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:58.683 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val= 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val=software 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val=32 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val=32 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 
-- # val=1 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val=No 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val= 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.684 22:26:59 -- accel/accel.sh@21 -- # val= 00:06:58.684 22:26:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.684 22:26:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.063 22:27:00 -- accel/accel.sh@21 -- # val= 00:07:00.063 22:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.063 22:27:00 -- accel/accel.sh@21 -- # val= 00:07:00.063 22:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.063 22:27:00 -- accel/accel.sh@21 -- # val= 00:07:00.063 22:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.063 22:27:00 -- accel/accel.sh@21 -- # val= 00:07:00.063 22:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.063 22:27:00 -- accel/accel.sh@21 -- # val= 00:07:00.063 22:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.063 22:27:00 -- accel/accel.sh@21 -- # val= 00:07:00.063 22:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.063 22:27:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.063 22:27:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.063 ************************************ 00:07:00.063 END TEST accel_dif_generate_copy 00:07:00.063 ************************************ 00:07:00.063 22:27:00 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:00.063 22:27:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.063 00:07:00.063 real 0m2.977s 00:07:00.063 user 0m2.488s 00:07:00.063 sys 0m0.281s 00:07:00.063 22:27:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.063 22:27:00 -- common/autotest_common.sh@10 -- # set +x 00:07:00.063 22:27:00 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:00.063 22:27:00 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.063 22:27:00 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:00.063 22:27:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.063 22:27:00 -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.063 ************************************ 00:07:00.063 START TEST accel_comp 00:07:00.063 ************************************ 00:07:00.063 22:27:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.063 22:27:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.063 22:27:00 -- accel/accel.sh@17 -- # local accel_module 00:07:00.063 22:27:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.063 22:27:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.063 22:27:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.063 22:27:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.063 22:27:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.063 22:27:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.063 22:27:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.063 22:27:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.063 22:27:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.063 22:27:00 -- accel/accel.sh@42 -- # jq -r . 00:07:00.063 [2024-11-20 22:27:00.497584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:00.063 [2024-11-20 22:27:00.497683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70902 ] 00:07:00.063 [2024-11-20 22:27:00.624838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.063 [2024-11-20 22:27:00.701977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.440 22:27:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:01.440 00:07:01.440 SPDK Configuration: 00:07:01.440 Core mask: 0x1 00:07:01.440 00:07:01.440 Accel Perf Configuration: 00:07:01.440 Workload Type: compress 00:07:01.440 Transfer size: 4096 bytes 00:07:01.440 Vector count 1 00:07:01.440 Module: software 00:07:01.440 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.440 Queue depth: 32 00:07:01.440 Allocate depth: 32 00:07:01.440 # threads/core: 1 00:07:01.440 Run time: 1 seconds 00:07:01.440 Verify: No 00:07:01.440 00:07:01.440 Running for 1 seconds... 
00:07:01.440 00:07:01.440 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.440 ------------------------------------------------------------------------------------ 00:07:01.440 0,0 59136/s 231 MiB/s 0 0 00:07:01.440 ==================================================================================== 00:07:01.440 Total 59136/s 231 MiB/s 0 0' 00:07:01.440 22:27:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.440 22:27:01 -- accel/accel.sh@20 -- # IFS=: 00:07:01.440 22:27:01 -- accel/accel.sh@20 -- # read -r var val 00:07:01.440 22:27:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.440 22:27:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.440 22:27:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.440 22:27:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.440 22:27:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.440 22:27:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.440 22:27:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.440 22:27:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.440 22:27:02 -- accel/accel.sh@42 -- # jq -r . 00:07:01.440 [2024-11-20 22:27:02.022662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:01.440 [2024-11-20 22:27:02.022748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70922 ] 00:07:01.440 [2024-11-20 22:27:02.159972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.700 [2024-11-20 22:27:02.251709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val= 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val= 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val= 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val=0x1 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val= 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val= 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val=compress 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=:
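Unlike the dif_* cases, the compress run summarized above operates on a real input file: the -l flag points accel_perf at the bundled bib test file, which is what shows up as "File Name" in the configuration dump. A rough standalone equivalent of the first pass, assuming the same checkout path as in this log, would be:

  # hypothetical manual compress run against the bundled bib test file
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib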
00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val= 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val=software 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val=32 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val=32 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.700 22:27:02 -- accel/accel.sh@21 -- # val=1 00:07:01.700 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.700 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.701 22:27:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.701 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.701 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.701 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.701 22:27:02 -- accel/accel.sh@21 -- # val=No 00:07:01.701 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.701 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.701 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.701 22:27:02 -- accel/accel.sh@21 -- # val= 00:07:01.701 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.701 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.701 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.701 22:27:02 -- accel/accel.sh@21 -- # val= 00:07:01.701 22:27:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.701 22:27:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.701 22:27:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.078 22:27:03 -- accel/accel.sh@21 -- # val= 00:07:03.078 22:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # IFS=: 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # read -r var val 00:07:03.078 22:27:03 -- accel/accel.sh@21 -- # val= 00:07:03.078 22:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # IFS=: 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # read -r var val 00:07:03.078 22:27:03 -- accel/accel.sh@21 -- # val= 00:07:03.078 22:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # IFS=: 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # read -r var val 00:07:03.078 22:27:03 -- accel/accel.sh@21 -- # val= 
00:07:03.078 22:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # IFS=: 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # read -r var val 00:07:03.078 22:27:03 -- accel/accel.sh@21 -- # val= 00:07:03.078 22:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # IFS=: 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # read -r var val 00:07:03.078 22:27:03 -- accel/accel.sh@21 -- # val= 00:07:03.078 22:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # IFS=: 00:07:03.078 22:27:03 -- accel/accel.sh@20 -- # read -r var val 00:07:03.078 22:27:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.078 22:27:03 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:03.078 22:27:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.078 00:07:03.078 real 0m3.060s 00:07:03.078 user 0m2.552s 00:07:03.078 sys 0m0.301s 00:07:03.078 22:27:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.078 22:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:03.078 ************************************ 00:07:03.078 END TEST accel_comp 00:07:03.078 ************************************ 00:07:03.078 22:27:03 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.078 22:27:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:03.078 22:27:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.078 22:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:03.078 ************************************ 00:07:03.078 START TEST accel_decomp 00:07:03.078 ************************************ 00:07:03.078 22:27:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.078 22:27:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.079 22:27:03 -- accel/accel.sh@17 -- # local accel_module 00:07:03.079 22:27:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.079 22:27:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.079 22:27:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.079 22:27:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.079 22:27:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.079 22:27:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.079 22:27:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.079 22:27:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.079 22:27:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.079 22:27:03 -- accel/accel.sh@42 -- # jq -r . 00:07:03.079 [2024-11-20 22:27:03.602075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.079 [2024-11-20 22:27:03.602153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70956 ] 00:07:03.079 [2024-11-20 22:27:03.728849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.079 [2024-11-20 22:27:03.797855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.459 22:27:05 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:04.459 00:07:04.459 SPDK Configuration: 00:07:04.459 Core mask: 0x1 00:07:04.459 00:07:04.459 Accel Perf Configuration: 00:07:04.459 Workload Type: decompress 00:07:04.459 Transfer size: 4096 bytes 00:07:04.459 Vector count 1 00:07:04.459 Module: software 00:07:04.459 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.459 Queue depth: 32 00:07:04.459 Allocate depth: 32 00:07:04.459 # threads/core: 1 00:07:04.459 Run time: 1 seconds 00:07:04.459 Verify: Yes 00:07:04.459 00:07:04.459 Running for 1 seconds... 00:07:04.459 00:07:04.459 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.459 ------------------------------------------------------------------------------------ 00:07:04.459 0,0 85408/s 333 MiB/s 0 0 00:07:04.459 ==================================================================================== 00:07:04.459 Total 85408/s 333 MiB/s 0 0' 00:07:04.459 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.459 22:27:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:04.459 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.459 22:27:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:04.459 22:27:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.459 22:27:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.459 22:27:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.459 22:27:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.459 22:27:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.459 22:27:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.459 22:27:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.459 22:27:05 -- accel/accel.sh@42 -- # jq -r . 00:07:04.459 [2024-11-20 22:27:05.113660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
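The decompress case above reuses the compress invocation pattern and adds -y, which lines up with the "Verify: Yes" line in its configuration dump (the compress run, which omits -y, reported "Verify: No"). A rough standalone equivalent of the first pass:

  # hypothetical manual decompress-and-verify run (flags as recorded in this log)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y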
00:07:04.459 [2024-11-20 22:27:05.113768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70976 ] 00:07:04.718 [2024-11-20 22:27:05.251364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.718 [2024-11-20 22:27:05.317923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.718 22:27:05 -- accel/accel.sh@21 -- # val= 00:07:04.718 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.718 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.718 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.718 22:27:05 -- accel/accel.sh@21 -- # val= 00:07:04.718 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.718 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val= 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val=0x1 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val= 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val= 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val=decompress 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val= 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val=software 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val=32 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- 
accel/accel.sh@21 -- # val=32 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val=1 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val=Yes 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val= 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.719 22:27:05 -- accel/accel.sh@21 -- # val= 00:07:04.719 22:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # IFS=: 00:07:04.719 22:27:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 22:27:06 -- accel/accel.sh@21 -- # val= 00:07:06.095 22:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 22:27:06 -- accel/accel.sh@21 -- # val= 00:07:06.095 22:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 22:27:06 -- accel/accel.sh@21 -- # val= 00:07:06.095 22:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 22:27:06 -- accel/accel.sh@21 -- # val= 00:07:06.095 22:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 22:27:06 -- accel/accel.sh@21 -- # val= 00:07:06.095 22:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 22:27:06 -- accel/accel.sh@21 -- # val= 00:07:06.095 22:27:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 22:27:06 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 22:27:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.095 22:27:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.095 22:27:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.095 00:07:06.095 real 0m2.999s 00:07:06.095 user 0m2.528s 00:07:06.095 sys 0m0.273s 00:07:06.096 22:27:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.096 22:27:06 -- common/autotest_common.sh@10 -- # set +x 00:07:06.096 ************************************ 00:07:06.096 END TEST accel_decomp 00:07:06.096 ************************************ 00:07:06.096 22:27:06 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
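The accel_decmop_full case launched above keeps the decompress invocation but appends -o 0; judging by the configuration dump that follows, the run then reports a transfer size of 111250 bytes (apparently the full decompressed bib buffer) instead of the 4096-byte chunks used so far. The underlying accel_perf command recorded in the run_test line is effectively:

  # full-buffer decompress run, flags copied from the run_test invocation above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0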
00:07:06.096 22:27:06 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:06.096 22:27:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.096 22:27:06 -- common/autotest_common.sh@10 -- # set +x 00:07:06.096 ************************************ 00:07:06.096 START TEST accel_decmop_full 00:07:06.096 ************************************ 00:07:06.096 22:27:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:06.096 22:27:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.096 22:27:06 -- accel/accel.sh@17 -- # local accel_module 00:07:06.096 22:27:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:06.096 22:27:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:06.096 22:27:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.096 22:27:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.096 22:27:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.096 22:27:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.096 22:27:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.096 22:27:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.096 22:27:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.096 22:27:06 -- accel/accel.sh@42 -- # jq -r . 00:07:06.096 [2024-11-20 22:27:06.657866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:06.096 [2024-11-20 22:27:06.657948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71010 ] 00:07:06.096 [2024-11-20 22:27:06.793444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.354 [2024-11-20 22:27:06.861771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.730 22:27:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:07.730 00:07:07.730 SPDK Configuration: 00:07:07.730 Core mask: 0x1 00:07:07.730 00:07:07.730 Accel Perf Configuration: 00:07:07.730 Workload Type: decompress 00:07:07.730 Transfer size: 111250 bytes 00:07:07.730 Vector count 1 00:07:07.730 Module: software 00:07:07.730 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.730 Queue depth: 32 00:07:07.730 Allocate depth: 32 00:07:07.730 # threads/core: 1 00:07:07.730 Run time: 1 seconds 00:07:07.730 Verify: Yes 00:07:07.730 00:07:07.730 Running for 1 seconds... 
00:07:07.730 00:07:07.730 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.730 ------------------------------------------------------------------------------------ 00:07:07.730 0,0 5760/s 611 MiB/s 0 0 00:07:07.731 ==================================================================================== 00:07:07.731 Total 5760/s 611 MiB/s 0 0' 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.731 22:27:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.731 22:27:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.731 22:27:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.731 22:27:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.731 22:27:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.731 22:27:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.731 22:27:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.731 22:27:08 -- accel/accel.sh@42 -- # jq -r . 00:07:07.731 [2024-11-20 22:27:08.153797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:07.731 [2024-11-20 22:27:08.153880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71030 ] 00:07:07.731 [2024-11-20 22:27:08.286016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.731 [2024-11-20 22:27:08.343540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val= 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val= 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val=0x1 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val= 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val= 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val=decompress 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:07.731 22:27:08 -- accel/accel.sh@20
-- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val= 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val=software 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val=32 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val=32 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val=1 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val=Yes 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val= 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.731 22:27:08 -- accel/accel.sh@21 -- # val= 00:07:07.731 22:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.731 22:27:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.107 22:27:09 -- accel/accel.sh@21 -- # val= 00:07:09.107 22:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # IFS=: 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # read -r var val 00:07:09.107 22:27:09 -- accel/accel.sh@21 -- # val= 00:07:09.107 22:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # IFS=: 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # read -r var val 00:07:09.107 22:27:09 -- accel/accel.sh@21 -- # val= 00:07:09.107 22:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # IFS=: 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # read -r var val 00:07:09.107 22:27:09 -- accel/accel.sh@21 -- # 
val= 00:07:09.107 22:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # IFS=: 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # read -r var val 00:07:09.107 22:27:09 -- accel/accel.sh@21 -- # val= 00:07:09.107 22:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # IFS=: 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # read -r var val 00:07:09.107 22:27:09 -- accel/accel.sh@21 -- # val= 00:07:09.107 22:27:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # IFS=: 00:07:09.107 22:27:09 -- accel/accel.sh@20 -- # read -r var val 00:07:09.107 22:27:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.107 ************************************ 00:07:09.107 END TEST accel_decmop_full 00:07:09.107 ************************************ 00:07:09.107 22:27:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.107 22:27:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.107 00:07:09.107 real 0m3.011s 00:07:09.107 user 0m2.545s 00:07:09.107 sys 0m0.270s 00:07:09.107 22:27:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.107 22:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:09.107 22:27:09 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.107 22:27:09 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:09.107 22:27:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.107 22:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:09.107 ************************************ 00:07:09.107 START TEST accel_decomp_mcore 00:07:09.107 ************************************ 00:07:09.107 22:27:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.107 22:27:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.107 22:27:09 -- accel/accel.sh@17 -- # local accel_module 00:07:09.107 22:27:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.107 22:27:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.107 22:27:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.107 22:27:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.107 22:27:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.107 22:27:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.107 22:27:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.107 22:27:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.107 22:27:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.107 22:27:09 -- accel/accel.sh@42 -- # jq -r . 00:07:09.107 [2024-11-20 22:27:09.719777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
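The accel_decomp_mcore case starting here widens the core mask with -m 0xf; the EAL messages that follow report four available cores with a reactor on each of cores 0 through 3, and the results table below carries one row per core. A standalone sketch of the same run, under the same path assumptions as the earlier cases:

  # hypothetical multi-core decompress run pinned to cores 0-3
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf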
00:07:09.107 [2024-11-20 22:27:09.719864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:07:09.366 [2024-11-20 22:27:09.854625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.366 [2024-11-20 22:27:09.924819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.366 [2024-11-20 22:27:09.924953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.366 [2024-11-20 22:27:09.925079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.366 [2024-11-20 22:27:09.925083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.773 22:27:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:10.773 00:07:10.773 SPDK Configuration: 00:07:10.773 Core mask: 0xf 00:07:10.773 00:07:10.773 Accel Perf Configuration: 00:07:10.773 Workload Type: decompress 00:07:10.773 Transfer size: 4096 bytes 00:07:10.773 Vector count 1 00:07:10.773 Module: software 00:07:10.773 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.773 Queue depth: 32 00:07:10.773 Allocate depth: 32 00:07:10.773 # threads/core: 1 00:07:10.773 Run time: 1 seconds 00:07:10.773 Verify: Yes 00:07:10.773 00:07:10.773 Running for 1 seconds... 00:07:10.773 00:07:10.773 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.773 ------------------------------------------------------------------------------------ 00:07:10.773 0,0 62688/s 244 MiB/s 0 0 00:07:10.773 3,0 56960/s 222 MiB/s 0 0 00:07:10.773 2,0 54528/s 213 MiB/s 0 0 00:07:10.773 1,0 55808/s 218 MiB/s 0 0 00:07:10.773 ==================================================================================== 00:07:10.773 Total 229984/s 898 MiB/s 0 0' 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.773 22:27:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:10.773 22:27:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:10.773 22:27:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.773 22:27:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.773 22:27:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.773 22:27:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.773 22:27:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.773 22:27:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.773 22:27:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.773 22:27:11 -- accel/accel.sh@42 -- # jq -r . 00:07:10.773 [2024-11-20 22:27:11.217852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
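In the multi-core table above, the Total row is simply the sum of the per-core rows: the four reactors together complete 229984 transfers/s, and at 4096 bytes per transfer that works out to the reported 898 MiB/s:

  # 62688 + 56960 + 54528 + 55808 = 229984 transfers/s across cores 0-3
  echo $(( (62688 + 56960 + 54528 + 55808) * 4096 / 1048576 ))   # prints 898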
00:07:10.773 [2024-11-20 22:27:11.217939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71087 ] 00:07:10.773 [2024-11-20 22:27:11.354372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.773 [2024-11-20 22:27:11.420083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.773 [2024-11-20 22:27:11.420228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.773 [2024-11-20 22:27:11.420363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.773 [2024-11-20 22:27:11.420741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.773 22:27:11 -- accel/accel.sh@21 -- # val= 00:07:10.773 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.773 22:27:11 -- accel/accel.sh@21 -- # val= 00:07:10.773 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.773 22:27:11 -- accel/accel.sh@21 -- # val= 00:07:10.773 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.773 22:27:11 -- accel/accel.sh@21 -- # val=0xf 00:07:10.773 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.773 22:27:11 -- accel/accel.sh@21 -- # val= 00:07:10.773 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.773 22:27:11 -- accel/accel.sh@21 -- # val= 00:07:10.773 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.773 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.773 22:27:11 -- accel/accel.sh@21 -- # val=decompress 00:07:10.773 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.774 22:27:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:10.774 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.774 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.774 22:27:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.774 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.774 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.774 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.774 22:27:11 -- accel/accel.sh@21 -- # val= 00:07:10.774 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.774 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.774 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.774 22:27:11 -- accel/accel.sh@21 -- # val=software 00:07:10.774 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.774 22:27:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.774 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.774 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.774 22:27:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.032 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # IFS=: 
00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:11.032 22:27:11 -- accel/accel.sh@21 -- # val=32 00:07:11.032 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:11.032 22:27:11 -- accel/accel.sh@21 -- # val=32 00:07:11.032 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:11.032 22:27:11 -- accel/accel.sh@21 -- # val=1 00:07:11.032 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:11.032 22:27:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.032 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:11.032 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:11.033 22:27:11 -- accel/accel.sh@21 -- # val=Yes 00:07:11.033 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.033 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:11.033 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:11.033 22:27:11 -- accel/accel.sh@21 -- # val= 00:07:11.033 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.033 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:11.033 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:11.033 22:27:11 -- accel/accel.sh@21 -- # val= 00:07:11.033 22:27:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.033 22:27:11 -- accel/accel.sh@20 -- # IFS=: 00:07:11.033 22:27:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- 
accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@21 -- # val= 00:07:12.444 22:27:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 22:27:12 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 22:27:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.444 22:27:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.444 22:27:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.444 00:07:12.444 real 0m3.025s 00:07:12.444 user 0m9.614s 00:07:12.444 sys 0m0.302s 00:07:12.444 22:27:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.444 22:27:12 -- common/autotest_common.sh@10 -- # set +x 00:07:12.444 ************************************ 00:07:12.444 END TEST accel_decomp_mcore 00:07:12.444 ************************************ 00:07:12.444 22:27:12 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.444 22:27:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:12.444 22:27:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.444 22:27:12 -- common/autotest_common.sh@10 -- # set +x 00:07:12.444 ************************************ 00:07:12.444 START TEST accel_decomp_full_mcore 00:07:12.444 ************************************ 00:07:12.444 22:27:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.444 22:27:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.444 22:27:12 -- accel/accel.sh@17 -- # local accel_module 00:07:12.444 22:27:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.444 22:27:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.444 22:27:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.444 22:27:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.444 22:27:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.444 22:27:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.444 22:27:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.444 22:27:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.444 22:27:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.444 22:27:12 -- accel/accel.sh@42 -- # jq -r . 00:07:12.444 [2024-11-20 22:27:12.792739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:12.444 [2024-11-20 22:27:12.792836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71124 ] 00:07:12.444 [2024-11-20 22:27:12.927036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.444 [2024-11-20 22:27:12.989890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.444 [2024-11-20 22:27:12.990035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.444 [2024-11-20 22:27:12.990382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.444 [2024-11-20 22:27:12.990390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.831 22:27:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:13.831 00:07:13.831 SPDK Configuration: 00:07:13.831 Core mask: 0xf 00:07:13.831 00:07:13.831 Accel Perf Configuration: 00:07:13.831 Workload Type: decompress 00:07:13.831 Transfer size: 111250 bytes 00:07:13.831 Vector count 1 00:07:13.831 Module: software 00:07:13.831 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.831 Queue depth: 32 00:07:13.831 Allocate depth: 32 00:07:13.831 # threads/core: 1 00:07:13.831 Run time: 1 seconds 00:07:13.831 Verify: Yes 00:07:13.831 00:07:13.831 Running for 1 seconds... 00:07:13.831 00:07:13.831 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.831 ------------------------------------------------------------------------------------ 00:07:13.831 0,0 5568/s 230 MiB/s 0 0 00:07:13.831 3,0 4928/s 203 MiB/s 0 0 00:07:13.831 2,0 5120/s 211 MiB/s 0 0 00:07:13.831 1,0 5152/s 212 MiB/s 0 0 00:07:13.831 ==================================================================================== 00:07:13.831 Total 20768/s 2203 MiB/s 0 0' 00:07:13.831 22:27:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.831 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:13.831 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:13.831 22:27:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.831 22:27:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.831 22:27:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.831 22:27:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.831 22:27:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.831 22:27:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.831 22:27:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.831 22:27:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.831 22:27:14 -- accel/accel.sh@42 -- # jq -r . 00:07:13.831 [2024-11-20 22:27:14.312453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
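As a sanity check on the summary row above, the aggregate bandwidth is simply the total transfer rate times the 111250-byte transfer size used by this run; a quick sketch of that arithmetic:

    awk 'BEGIN {
        transfers = 20768        # Total transfers/s from the table above
        size      = 111250       # Transfer size in bytes for this configuration
        printf "%.0f MiB/s\n", transfers * size / (1024 * 1024)   # prints 2203 MiB/s
    }'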
00:07:13.831 [2024-11-20 22:27:14.312535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71147 ] 00:07:13.831 [2024-11-20 22:27:14.442678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.831 [2024-11-20 22:27:14.508433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.831 [2024-11-20 22:27:14.508576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.831 [2024-11-20 22:27:14.508726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.831 [2024-11-20 22:27:14.508728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.090 22:27:14 -- accel/accel.sh@21 -- # val= 00:07:14.090 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.090 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.090 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.090 22:27:14 -- accel/accel.sh@21 -- # val= 00:07:14.090 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.090 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.090 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.090 22:27:14 -- accel/accel.sh@21 -- # val= 00:07:14.090 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.090 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.090 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.090 22:27:14 -- accel/accel.sh@21 -- # val=0xf 00:07:14.090 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.090 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.090 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val= 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val= 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val=decompress 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val= 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val=software 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 
00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val=32 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val=32 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val=1 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val=Yes 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val= 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:14.091 22:27:14 -- accel/accel.sh@21 -- # val= 00:07:14.091 22:27:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # IFS=: 00:07:14.091 22:27:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- 
accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@21 -- # val= 00:07:15.467 22:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 22:27:15 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 22:27:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.467 ************************************ 00:07:15.467 END TEST accel_decomp_full_mcore 00:07:15.467 ************************************ 00:07:15.467 22:27:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:15.467 22:27:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.467 00:07:15.467 real 0m3.018s 00:07:15.467 user 0m9.668s 00:07:15.467 sys 0m0.298s 00:07:15.467 22:27:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.467 22:27:15 -- common/autotest_common.sh@10 -- # set +x 00:07:15.467 22:27:15 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.467 22:27:15 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:15.467 22:27:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.467 22:27:15 -- common/autotest_common.sh@10 -- # set +x 00:07:15.467 ************************************ 00:07:15.467 START TEST accel_decomp_mthread 00:07:15.467 ************************************ 00:07:15.467 22:27:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.467 22:27:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.467 22:27:15 -- accel/accel.sh@17 -- # local accel_module 00:07:15.468 22:27:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.468 22:27:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.468 22:27:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.468 22:27:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.468 22:27:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.468 22:27:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.468 22:27:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.468 22:27:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.468 22:27:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.468 22:27:15 -- accel/accel.sh@42 -- # jq -r . 00:07:15.468 [2024-11-20 22:27:15.865113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:15.468 [2024-11-20 22:27:15.865200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71184 ] 00:07:15.468 [2024-11-20 22:27:15.993633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.468 [2024-11-20 22:27:16.058027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.843 22:27:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:16.843 00:07:16.843 SPDK Configuration: 00:07:16.843 Core mask: 0x1 00:07:16.843 00:07:16.843 Accel Perf Configuration: 00:07:16.843 Workload Type: decompress 00:07:16.843 Transfer size: 4096 bytes 00:07:16.843 Vector count 1 00:07:16.843 Module: software 00:07:16.843 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.843 Queue depth: 32 00:07:16.843 Allocate depth: 32 00:07:16.843 # threads/core: 2 00:07:16.843 Run time: 1 seconds 00:07:16.843 Verify: Yes 00:07:16.843 00:07:16.843 Running for 1 seconds... 00:07:16.843 00:07:16.843 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.843 ------------------------------------------------------------------------------------ 00:07:16.843 0,1 43392/s 79 MiB/s 0 0 00:07:16.843 0,0 43232/s 79 MiB/s 0 0 00:07:16.843 ==================================================================================== 00:07:16.843 Total 86624/s 338 MiB/s 0 0' 00:07:16.843 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:16.843 22:27:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:16.843 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:16.843 22:27:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:16.843 22:27:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.843 22:27:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.843 22:27:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.843 22:27:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.843 22:27:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.843 22:27:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.843 22:27:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.843 22:27:17 -- accel/accel.sh@42 -- # jq -r . 00:07:16.843 [2024-11-20 22:27:17.373687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
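This pass runs the same 4096-byte decompress on a single core with two worker threads; the 0,0 and 0,1 rows above are those two threads, and their transfer counts sum to the 86624/s total. A minimal hand-run sketch of the threaded invocation (the wrapper-generated JSON config is again omitted):

    # -T 2 : two threads per core; the run stays on core 0 (core mask 0x1, as reported above)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2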
00:07:16.843 [2024-11-20 22:27:17.373779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71204 ] 00:07:16.843 [2024-11-20 22:27:17.509693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.843 [2024-11-20 22:27:17.572938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val= 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val= 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val= 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val=0x1 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val= 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val= 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val=decompress 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val= 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val=software 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val=32 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- 
accel/accel.sh@21 -- # val=32 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val=2 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val=Yes 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val= 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:17.102 22:27:17 -- accel/accel.sh@21 -- # val= 00:07:17.102 22:27:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # IFS=: 00:07:17.102 22:27:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.478 22:27:18 -- accel/accel.sh@21 -- # val= 00:07:18.478 22:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # IFS=: 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # read -r var val 00:07:18.478 22:27:18 -- accel/accel.sh@21 -- # val= 00:07:18.478 22:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # IFS=: 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # read -r var val 00:07:18.478 22:27:18 -- accel/accel.sh@21 -- # val= 00:07:18.478 22:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # IFS=: 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # read -r var val 00:07:18.478 22:27:18 -- accel/accel.sh@21 -- # val= 00:07:18.478 22:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # IFS=: 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # read -r var val 00:07:18.478 22:27:18 -- accel/accel.sh@21 -- # val= 00:07:18.478 22:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # IFS=: 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # read -r var val 00:07:18.478 22:27:18 -- accel/accel.sh@21 -- # val= 00:07:18.478 22:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # IFS=: 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # read -r var val 00:07:18.478 22:27:18 -- accel/accel.sh@21 -- # val= 00:07:18.478 22:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # IFS=: 00:07:18.478 22:27:18 -- accel/accel.sh@20 -- # read -r var val 00:07:18.478 22:27:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.478 22:27:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:18.478 ************************************ 00:07:18.478 END TEST accel_decomp_mthread 00:07:18.478 ************************************ 00:07:18.478 22:27:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.478 00:07:18.478 real 0m2.998s 00:07:18.478 user 0m2.524s 00:07:18.478 sys 0m0.270s 00:07:18.478 22:27:18 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:18.478 22:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:18.478 22:27:18 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.478 22:27:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:18.478 22:27:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.478 22:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:18.478 ************************************ 00:07:18.478 START TEST accel_deomp_full_mthread 00:07:18.478 ************************************ 00:07:18.478 22:27:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.478 22:27:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.478 22:27:18 -- accel/accel.sh@17 -- # local accel_module 00:07:18.478 22:27:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.478 22:27:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.478 22:27:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.478 22:27:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.478 22:27:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.478 22:27:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.478 22:27:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.478 22:27:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.478 22:27:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.478 22:27:18 -- accel/accel.sh@42 -- # jq -r . 00:07:18.478 [2024-11-20 22:27:18.914054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.478 [2024-11-20 22:27:18.914171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71238 ] 00:07:18.478 [2024-11-20 22:27:19.049823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.478 [2024-11-20 22:27:19.118052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.853 22:27:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:19.853 00:07:19.853 SPDK Configuration: 00:07:19.853 Core mask: 0x1 00:07:19.853 00:07:19.853 Accel Perf Configuration: 00:07:19.853 Workload Type: decompress 00:07:19.853 Transfer size: 111250 bytes 00:07:19.853 Vector count 1 00:07:19.853 Module: software 00:07:19.853 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.853 Queue depth: 32 00:07:19.853 Allocate depth: 32 00:07:19.853 # threads/core: 2 00:07:19.853 Run time: 1 seconds 00:07:19.853 Verify: Yes 00:07:19.853 00:07:19.853 Running for 1 seconds... 
00:07:19.853 00:07:19.853 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.853 ------------------------------------------------------------------------------------ 00:07:19.853 0,1 2880/s 118 MiB/s 0 0 00:07:19.853 0,0 2848/s 117 MiB/s 0 0 00:07:19.853 ==================================================================================== 00:07:19.853 Total 5728/s 607 MiB/s 0 0' 00:07:19.853 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:19.853 22:27:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.853 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:19.853 22:27:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.853 22:27:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.853 22:27:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.853 22:27:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.853 22:27:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.853 22:27:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.853 22:27:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.853 22:27:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.853 22:27:20 -- accel/accel.sh@42 -- # jq -r . 00:07:19.853 [2024-11-20 22:27:20.421137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.853 [2024-11-20 22:27:20.421224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71258 ] 00:07:19.854 [2024-11-20 22:27:20.554264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.113 [2024-11-20 22:27:20.618121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val= 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val= 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val= 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val=0x1 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val= 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val= 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val=decompress 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val= 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val=software 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val=32 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val=32 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val=2 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val=Yes 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val= 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:20.113 22:27:20 -- accel/accel.sh@21 -- # val= 00:07:20.113 22:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # IFS=: 00:07:20.113 22:27:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.489 22:27:21 -- accel/accel.sh@21 -- # val= 00:07:21.490 22:27:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # IFS=: 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # read -r var val 00:07:21.490 22:27:21 -- accel/accel.sh@21 -- # val= 00:07:21.490 22:27:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # IFS=: 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # read -r var val 00:07:21.490 22:27:21 -- accel/accel.sh@21 -- # val= 00:07:21.490 22:27:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # IFS=: 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # 
read -r var val 00:07:21.490 22:27:21 -- accel/accel.sh@21 -- # val= 00:07:21.490 22:27:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # IFS=: 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # read -r var val 00:07:21.490 22:27:21 -- accel/accel.sh@21 -- # val= 00:07:21.490 22:27:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # IFS=: 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # read -r var val 00:07:21.490 22:27:21 -- accel/accel.sh@21 -- # val= 00:07:21.490 22:27:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # IFS=: 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # read -r var val 00:07:21.490 22:27:21 -- accel/accel.sh@21 -- # val= 00:07:21.490 22:27:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # IFS=: 00:07:21.490 22:27:21 -- accel/accel.sh@20 -- # read -r var val 00:07:21.490 22:27:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.490 22:27:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:21.490 22:27:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.490 00:07:21.490 real 0m3.020s 00:07:21.490 user 0m2.535s 00:07:21.490 sys 0m0.277s 00:07:21.490 22:27:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.490 22:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:21.490 ************************************ 00:07:21.490 END TEST accel_deomp_full_mthread 00:07:21.490 ************************************ 00:07:21.490 22:27:21 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:21.490 22:27:21 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:21.490 22:27:21 -- accel/accel.sh@129 -- # build_accel_config 00:07:21.490 22:27:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:21.490 22:27:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.490 22:27:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.490 22:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:21.490 22:27:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.490 22:27:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.490 22:27:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.490 22:27:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.490 22:27:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.490 22:27:21 -- accel/accel.sh@42 -- # jq -r . 00:07:21.490 ************************************ 00:07:21.490 START TEST accel_dif_functional_tests 00:07:21.490 ************************************ 00:07:21.490 22:27:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:21.490 [2024-11-20 22:27:22.015141] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
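The -c /dev/fd/62 argument seen in these command lines is consistent with bash process substitution: build_accel_config (the harness helper traced at accel.sh@129 above) emits the accel JSON configuration on the fly and the binary reads it through a file-descriptor path instead of a temp file. A sketch of the same pattern for the DIF functional test binary started above (here accel_json_cfg stays empty, so the default software configuration applies):

    # run the CUnit accel_dif suite directly, feeding the generated config via process substitution
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(build_accel_config)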
00:07:21.490 [2024-11-20 22:27:22.015408] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ] 00:07:21.490 [2024-11-20 22:27:22.151113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.490 [2024-11-20 22:27:22.219934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.490 [2024-11-20 22:27:22.220086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.749 [2024-11-20 22:27:22.220095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.749 00:07:21.749 00:07:21.749 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.749 http://cunit.sourceforge.net/ 00:07:21.749 00:07:21.749 00:07:21.749 Suite: accel_dif 00:07:21.749 Test: verify: DIF generated, GUARD check ...passed 00:07:21.749 Test: verify: DIF generated, APPTAG check ...passed 00:07:21.749 Test: verify: DIF generated, REFTAG check ...passed 00:07:21.749 Test: verify: DIF not generated, GUARD check ...[2024-11-20 22:27:22.338389] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.749 [2024-11-20 22:27:22.338625] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.749 passed 00:07:21.749 Test: verify: DIF not generated, APPTAG check ...[2024-11-20 22:27:22.338896] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.749 [2024-11-20 22:27:22.339078] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.749 passed 00:07:21.749 Test: verify: DIF not generated, REFTAG check ...[2024-11-20 22:27:22.339236] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.749 [2024-11-20 22:27:22.339496] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.749 passed 00:07:21.749 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:21.749 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-20 22:27:22.339911] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:21.749 passed 00:07:21.749 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:21.749 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:21.749 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:21.749 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-20 22:27:22.340897] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:21.749 passed 00:07:21.749 Test: generate copy: DIF generated, GUARD check ...passed 00:07:21.749 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:21.749 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:21.749 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:21.749 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:21.749 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:21.749 Test: generate copy: iovecs-len validate ...[2024-11-20 22:27:22.342326] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned passed 00:07:21.749 Test: generate copy: buffer alignment validate ...passed 00:07:21.749 00:07:21.749 Run 
Summary: Type Total Ran Passed Failed Inactive 00:07:21.749 suites 1 1 n/a 0 0 00:07:21.749 tests 20 20 20 0 0 00:07:21.749 asserts 204 204 204 0 n/a 00:07:21.749 00:07:21.749 Elapsed time = 0.007 seconds 00:07:21.749 with block_size. 00:07:22.008 00:07:22.008 real 0m0.615s 00:07:22.008 user 0m0.901s 00:07:22.008 sys 0m0.183s 00:07:22.008 22:27:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.008 22:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:22.008 ************************************ 00:07:22.008 END TEST accel_dif_functional_tests 00:07:22.008 ************************************ 00:07:22.008 00:07:22.008 real 1m4.747s 00:07:22.008 user 1m8.465s 00:07:22.008 sys 0m7.350s 00:07:22.008 22:27:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.008 22:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:22.008 ************************************ 00:07:22.008 END TEST accel 00:07:22.008 ************************************ 00:07:22.008 22:27:22 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:22.008 22:27:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.008 22:27:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.008 22:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:22.008 ************************************ 00:07:22.008 START TEST accel_rpc 00:07:22.008 ************************************ 00:07:22.008 22:27:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:22.268 * Looking for test storage... 00:07:22.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:22.268 22:27:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.268 22:27:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.268 22:27:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.268 22:27:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.268 22:27:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.268 22:27:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.268 22:27:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.268 22:27:22 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.268 22:27:22 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.268 22:27:22 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.268 22:27:22 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.268 22:27:22 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.268 22:27:22 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.268 22:27:22 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.268 22:27:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.268 22:27:22 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.268 22:27:22 -- scripts/common.sh@344 -- # : 1 00:07:22.268 22:27:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.268 22:27:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.268 22:27:22 -- scripts/common.sh@364 -- # decimal 1 00:07:22.268 22:27:22 -- scripts/common.sh@352 -- # local d=1 00:07:22.268 22:27:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.268 22:27:22 -- scripts/common.sh@354 -- # echo 1 00:07:22.268 22:27:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.268 22:27:22 -- scripts/common.sh@365 -- # decimal 2 00:07:22.268 22:27:22 -- scripts/common.sh@352 -- # local d=2 00:07:22.268 22:27:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.268 22:27:22 -- scripts/common.sh@354 -- # echo 2 00:07:22.268 22:27:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.268 22:27:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.268 22:27:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.268 22:27:22 -- scripts/common.sh@367 -- # return 0 00:07:22.268 22:27:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.268 22:27:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.268 --rc genhtml_branch_coverage=1 00:07:22.268 --rc genhtml_function_coverage=1 00:07:22.268 --rc genhtml_legend=1 00:07:22.268 --rc geninfo_all_blocks=1 00:07:22.268 --rc geninfo_unexecuted_blocks=1 00:07:22.268 00:07:22.268 ' 00:07:22.268 22:27:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.268 --rc genhtml_branch_coverage=1 00:07:22.268 --rc genhtml_function_coverage=1 00:07:22.268 --rc genhtml_legend=1 00:07:22.268 --rc geninfo_all_blocks=1 00:07:22.268 --rc geninfo_unexecuted_blocks=1 00:07:22.268 00:07:22.268 ' 00:07:22.268 22:27:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.268 --rc genhtml_branch_coverage=1 00:07:22.268 --rc genhtml_function_coverage=1 00:07:22.268 --rc genhtml_legend=1 00:07:22.268 --rc geninfo_all_blocks=1 00:07:22.268 --rc geninfo_unexecuted_blocks=1 00:07:22.268 00:07:22.268 ' 00:07:22.268 22:27:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.268 --rc genhtml_branch_coverage=1 00:07:22.268 --rc genhtml_function_coverage=1 00:07:22.268 --rc genhtml_legend=1 00:07:22.268 --rc geninfo_all_blocks=1 00:07:22.268 --rc geninfo_unexecuted_blocks=1 00:07:22.268 00:07:22.268 ' 00:07:22.268 22:27:22 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:22.268 22:27:22 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71365 00:07:22.268 22:27:22 -- accel/accel_rpc.sh@15 -- # waitforlisten 71365 00:07:22.268 22:27:22 -- common/autotest_common.sh@829 -- # '[' -z 71365 ']' 00:07:22.268 22:27:22 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:22.268 22:27:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.268 22:27:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.268 22:27:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
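accel_rpc.sh brings the target up with --wait-for-rpc, which holds off subsystem initialization so that opcode assignments can be made before the accel framework starts; initialization is then completed over RPC. A minimal sketch of that startup step (binary path and socket as shown in this log):

    # start the target in the paused, pre-init state and wait for its RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    # the harness equivalent is waitforlisten <pid>, polling /var/tmp/spdk.sock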
00:07:22.268 22:27:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.268 22:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:22.268 [2024-11-20 22:27:22.895414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:22.268 [2024-11-20 22:27:22.895523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71365 ] 00:07:22.527 [2024-11-20 22:27:23.028939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.527 [2024-11-20 22:27:23.094862] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.527 [2024-11-20 22:27:23.095066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.527 22:27:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.527 22:27:23 -- common/autotest_common.sh@862 -- # return 0 00:07:22.527 22:27:23 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:22.527 22:27:23 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:22.527 22:27:23 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:22.527 22:27:23 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:22.527 22:27:23 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:22.527 22:27:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.527 22:27:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.527 22:27:23 -- common/autotest_common.sh@10 -- # set +x 00:07:22.527 ************************************ 00:07:22.527 START TEST accel_assign_opcode 00:07:22.527 ************************************ 00:07:22.527 22:27:23 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:22.527 22:27:23 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:22.527 22:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.527 22:27:23 -- common/autotest_common.sh@10 -- # set +x 00:07:22.527 [2024-11-20 22:27:23.159533] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:22.527 22:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.527 22:27:23 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:22.527 22:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.527 22:27:23 -- common/autotest_common.sh@10 -- # set +x 00:07:22.527 [2024-11-20 22:27:23.167519] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:22.527 22:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.527 22:27:23 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:22.527 22:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.527 22:27:23 -- common/autotest_common.sh@10 -- # set +x 00:07:22.787 22:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.787 22:27:23 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:22.787 22:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.787 22:27:23 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:22.787 22:27:23 -- common/autotest_common.sh@10 -- # set +x 00:07:22.787 22:27:23 -- accel/accel_rpc.sh@42 -- # grep software 00:07:22.787 22:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.787 software 00:07:22.787 00:07:22.787 
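The opcode-assignment flow exercised above can also be driven by hand with scripts/rpc.py (the rpc_cmd wrapper in these traces ends up issuing the same RPCs against /var/tmp/spdk.sock); a sketch, assuming the target was started with --wait-for-rpc as above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software   # pin the copy opcode to the software module
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init                   # finish initialization after --wait-for-rpc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software"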
real 0m0.359s 00:07:22.787 user 0m0.058s 00:07:22.787 sys 0m0.010s 00:07:22.787 22:27:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.787 ************************************ 00:07:22.787 22:27:23 -- common/autotest_common.sh@10 -- # set +x 00:07:22.787 END TEST accel_assign_opcode 00:07:22.787 ************************************ 00:07:23.046 22:27:23 -- accel/accel_rpc.sh@55 -- # killprocess 71365 00:07:23.046 22:27:23 -- common/autotest_common.sh@936 -- # '[' -z 71365 ']' 00:07:23.046 22:27:23 -- common/autotest_common.sh@940 -- # kill -0 71365 00:07:23.046 22:27:23 -- common/autotest_common.sh@941 -- # uname 00:07:23.046 22:27:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:23.046 22:27:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71365 00:07:23.046 22:27:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:23.046 killing process with pid 71365 00:07:23.046 22:27:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:23.046 22:27:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71365' 00:07:23.046 22:27:23 -- common/autotest_common.sh@955 -- # kill 71365 00:07:23.046 22:27:23 -- common/autotest_common.sh@960 -- # wait 71365 00:07:23.615 00:07:23.615 real 0m1.428s 00:07:23.615 user 0m1.249s 00:07:23.615 sys 0m0.509s 00:07:23.615 22:27:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.615 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:07:23.615 ************************************ 00:07:23.615 END TEST accel_rpc 00:07:23.615 ************************************ 00:07:23.615 22:27:24 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.615 22:27:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.615 22:27:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.615 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:07:23.615 ************************************ 00:07:23.615 START TEST app_cmdline 00:07:23.615 ************************************ 00:07:23.615 22:27:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.615 * Looking for test storage... 
00:07:23.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.615 22:27:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:23.615 22:27:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:23.615 22:27:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:23.615 22:27:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:23.615 22:27:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:23.615 22:27:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:23.615 22:27:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:23.615 22:27:24 -- scripts/common.sh@335 -- # IFS=.-: 00:07:23.615 22:27:24 -- scripts/common.sh@335 -- # read -ra ver1 00:07:23.615 22:27:24 -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.615 22:27:24 -- scripts/common.sh@336 -- # read -ra ver2 00:07:23.615 22:27:24 -- scripts/common.sh@337 -- # local 'op=<' 00:07:23.615 22:27:24 -- scripts/common.sh@339 -- # ver1_l=2 00:07:23.615 22:27:24 -- scripts/common.sh@340 -- # ver2_l=1 00:07:23.615 22:27:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:23.615 22:27:24 -- scripts/common.sh@343 -- # case "$op" in 00:07:23.615 22:27:24 -- scripts/common.sh@344 -- # : 1 00:07:23.615 22:27:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:23.615 22:27:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.615 22:27:24 -- scripts/common.sh@364 -- # decimal 1 00:07:23.615 22:27:24 -- scripts/common.sh@352 -- # local d=1 00:07:23.615 22:27:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.615 22:27:24 -- scripts/common.sh@354 -- # echo 1 00:07:23.615 22:27:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:23.615 22:27:24 -- scripts/common.sh@365 -- # decimal 2 00:07:23.615 22:27:24 -- scripts/common.sh@352 -- # local d=2 00:07:23.615 22:27:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.615 22:27:24 -- scripts/common.sh@354 -- # echo 2 00:07:23.615 22:27:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:23.615 22:27:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:23.615 22:27:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:23.615 22:27:24 -- scripts/common.sh@367 -- # return 0 00:07:23.615 22:27:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.615 22:27:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:23.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.615 --rc genhtml_branch_coverage=1 00:07:23.615 --rc genhtml_function_coverage=1 00:07:23.615 --rc genhtml_legend=1 00:07:23.615 --rc geninfo_all_blocks=1 00:07:23.615 --rc geninfo_unexecuted_blocks=1 00:07:23.615 00:07:23.615 ' 00:07:23.615 22:27:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:23.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.615 --rc genhtml_branch_coverage=1 00:07:23.615 --rc genhtml_function_coverage=1 00:07:23.615 --rc genhtml_legend=1 00:07:23.615 --rc geninfo_all_blocks=1 00:07:23.615 --rc geninfo_unexecuted_blocks=1 00:07:23.615 00:07:23.615 ' 00:07:23.615 22:27:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:23.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.615 --rc genhtml_branch_coverage=1 00:07:23.615 --rc genhtml_function_coverage=1 00:07:23.615 --rc genhtml_legend=1 00:07:23.615 --rc geninfo_all_blocks=1 00:07:23.615 --rc geninfo_unexecuted_blocks=1 00:07:23.615 00:07:23.615 ' 00:07:23.615 22:27:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:23.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.615 --rc genhtml_branch_coverage=1 00:07:23.615 --rc genhtml_function_coverage=1 00:07:23.615 --rc genhtml_legend=1 00:07:23.615 --rc geninfo_all_blocks=1 00:07:23.615 --rc geninfo_unexecuted_blocks=1 00:07:23.615 00:07:23.615 ' 00:07:23.615 22:27:24 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.615 22:27:24 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71469 00:07:23.615 22:27:24 -- app/cmdline.sh@18 -- # waitforlisten 71469 00:07:23.615 22:27:24 -- common/autotest_common.sh@829 -- # '[' -z 71469 ']' 00:07:23.615 22:27:24 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.615 22:27:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.615 22:27:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.615 22:27:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.615 22:27:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.615 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:07:23.874 [2024-11-20 22:27:24.381917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.874 [2024-11-20 22:27:24.382020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71469 ] 00:07:23.874 [2024-11-20 22:27:24.518512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.874 [2024-11-20 22:27:24.587327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.874 [2024-11-20 22:27:24.587495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.811 22:27:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.811 22:27:25 -- common/autotest_common.sh@862 -- # return 0 00:07:24.811 22:27:25 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:25.070 { 00:07:25.070 "fields": { 00:07:25.070 "commit": "c13c99a5e", 00:07:25.070 "major": 24, 00:07:25.070 "minor": 1, 00:07:25.070 "patch": 1, 00:07:25.070 "suffix": "-pre" 00:07:25.070 }, 00:07:25.070 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:25.070 } 00:07:25.070 22:27:25 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:25.070 22:27:25 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:25.070 22:27:25 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:25.070 22:27:25 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:25.070 22:27:25 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:25.070 22:27:25 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:25.070 22:27:25 -- app/cmdline.sh@26 -- # sort 00:07:25.070 22:27:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.070 22:27:25 -- common/autotest_common.sh@10 -- # set +x 00:07:25.070 22:27:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.070 22:27:25 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:25.070 22:27:25 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:25.070 22:27:25 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.070 22:27:25 -- common/autotest_common.sh@650 -- # local es=0 00:07:25.070 22:27:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.070 22:27:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.070 22:27:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.070 22:27:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.070 22:27:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.070 22:27:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.070 22:27:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.070 22:27:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.070 22:27:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:25.070 22:27:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.329 2024/11/20 22:27:25 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:25.329 request: 00:07:25.329 { 00:07:25.329 "method": "env_dpdk_get_mem_stats", 00:07:25.329 "params": {} 00:07:25.329 } 00:07:25.329 Got JSON-RPC error response 00:07:25.329 GoRPCClient: error on JSON-RPC call 00:07:25.329 22:27:25 -- common/autotest_common.sh@653 -- # es=1 00:07:25.329 22:27:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.329 22:27:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.329 22:27:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.329 22:27:25 -- app/cmdline.sh@1 -- # killprocess 71469 00:07:25.329 22:27:25 -- common/autotest_common.sh@936 -- # '[' -z 71469 ']' 00:07:25.330 22:27:25 -- common/autotest_common.sh@940 -- # kill -0 71469 00:07:25.330 22:27:25 -- common/autotest_common.sh@941 -- # uname 00:07:25.330 22:27:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:25.330 22:27:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71469 00:07:25.330 22:27:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:25.330 killing process with pid 71469 00:07:25.330 22:27:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:25.330 22:27:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71469' 00:07:25.330 22:27:25 -- common/autotest_common.sh@955 -- # kill 71469 00:07:25.330 22:27:25 -- common/autotest_common.sh@960 -- # wait 71469 00:07:25.896 00:07:25.896 real 0m2.315s 00:07:25.896 user 0m2.800s 00:07:25.896 sys 0m0.558s 00:07:25.896 22:27:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.896 ************************************ 00:07:25.896 END TEST app_cmdline 00:07:25.896 ************************************ 00:07:25.896 22:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:25.896 22:27:26 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.896 22:27:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:25.896 22:27:26 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.896 22:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:25.896 ************************************ 00:07:25.896 START TEST version 00:07:25.896 ************************************ 00:07:25.896 22:27:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.896 * Looking for test storage... 00:07:25.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:25.896 22:27:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.896 22:27:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.896 22:27:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.156 22:27:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.156 22:27:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.156 22:27:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.156 22:27:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.156 22:27:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.156 22:27:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.156 22:27:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.156 22:27:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.156 22:27:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.156 22:27:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.156 22:27:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.156 22:27:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.156 22:27:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.156 22:27:26 -- scripts/common.sh@344 -- # : 1 00:07:26.156 22:27:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.156 22:27:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.156 22:27:26 -- scripts/common.sh@364 -- # decimal 1 00:07:26.156 22:27:26 -- scripts/common.sh@352 -- # local d=1 00:07:26.156 22:27:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.156 22:27:26 -- scripts/common.sh@354 -- # echo 1 00:07:26.156 22:27:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.156 22:27:26 -- scripts/common.sh@365 -- # decimal 2 00:07:26.156 22:27:26 -- scripts/common.sh@352 -- # local d=2 00:07:26.156 22:27:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.156 22:27:26 -- scripts/common.sh@354 -- # echo 2 00:07:26.156 22:27:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.156 22:27:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.156 22:27:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.156 22:27:26 -- scripts/common.sh@367 -- # return 0 00:07:26.156 22:27:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.156 22:27:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.156 --rc genhtml_branch_coverage=1 00:07:26.156 --rc genhtml_function_coverage=1 00:07:26.156 --rc genhtml_legend=1 00:07:26.156 --rc geninfo_all_blocks=1 00:07:26.156 --rc geninfo_unexecuted_blocks=1 00:07:26.156 00:07:26.156 ' 00:07:26.156 22:27:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.156 --rc genhtml_branch_coverage=1 00:07:26.156 --rc genhtml_function_coverage=1 00:07:26.156 --rc genhtml_legend=1 00:07:26.156 --rc geninfo_all_blocks=1 00:07:26.156 --rc geninfo_unexecuted_blocks=1 00:07:26.156 00:07:26.156 ' 00:07:26.156 
22:27:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.156 --rc genhtml_branch_coverage=1 00:07:26.156 --rc genhtml_function_coverage=1 00:07:26.156 --rc genhtml_legend=1 00:07:26.156 --rc geninfo_all_blocks=1 00:07:26.156 --rc geninfo_unexecuted_blocks=1 00:07:26.156 00:07:26.156 ' 00:07:26.156 22:27:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.156 --rc genhtml_branch_coverage=1 00:07:26.156 --rc genhtml_function_coverage=1 00:07:26.156 --rc genhtml_legend=1 00:07:26.156 --rc geninfo_all_blocks=1 00:07:26.156 --rc geninfo_unexecuted_blocks=1 00:07:26.156 00:07:26.156 ' 00:07:26.156 22:27:26 -- app/version.sh@17 -- # get_header_version major 00:07:26.156 22:27:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:26.156 22:27:26 -- app/version.sh@14 -- # cut -f2 00:07:26.156 22:27:26 -- app/version.sh@14 -- # tr -d '"' 00:07:26.156 22:27:26 -- app/version.sh@17 -- # major=24 00:07:26.156 22:27:26 -- app/version.sh@18 -- # get_header_version minor 00:07:26.156 22:27:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:26.156 22:27:26 -- app/version.sh@14 -- # cut -f2 00:07:26.156 22:27:26 -- app/version.sh@14 -- # tr -d '"' 00:07:26.156 22:27:26 -- app/version.sh@18 -- # minor=1 00:07:26.156 22:27:26 -- app/version.sh@19 -- # get_header_version patch 00:07:26.156 22:27:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:26.156 22:27:26 -- app/version.sh@14 -- # cut -f2 00:07:26.156 22:27:26 -- app/version.sh@14 -- # tr -d '"' 00:07:26.156 22:27:26 -- app/version.sh@19 -- # patch=1 00:07:26.156 22:27:26 -- app/version.sh@20 -- # get_header_version suffix 00:07:26.156 22:27:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:26.156 22:27:26 -- app/version.sh@14 -- # cut -f2 00:07:26.156 22:27:26 -- app/version.sh@14 -- # tr -d '"' 00:07:26.156 22:27:26 -- app/version.sh@20 -- # suffix=-pre 00:07:26.156 22:27:26 -- app/version.sh@22 -- # version=24.1 00:07:26.156 22:27:26 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:26.156 22:27:26 -- app/version.sh@25 -- # version=24.1.1 00:07:26.156 22:27:26 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:26.156 22:27:26 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:26.156 22:27:26 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:26.156 22:27:26 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:26.156 22:27:26 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:26.156 00:07:26.156 real 0m0.261s 00:07:26.156 user 0m0.171s 00:07:26.156 sys 0m0.130s 00:07:26.156 ************************************ 00:07:26.156 END TEST version 00:07:26.156 ************************************ 00:07:26.156 22:27:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.156 22:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:26.156 22:27:26 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:26.156 
22:27:26 -- spdk/autotest.sh@191 -- # uname -s 00:07:26.156 22:27:26 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:26.156 22:27:26 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:26.156 22:27:26 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:26.156 22:27:26 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:26.156 22:27:26 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:26.156 22:27:26 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:26.156 22:27:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.156 22:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:26.156 22:27:26 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:26.156 22:27:26 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:26.156 22:27:26 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:26.156 22:27:26 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:26.156 22:27:26 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:26.156 22:27:26 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:26.156 22:27:26 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:26.156 22:27:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:26.156 22:27:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.156 22:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:26.156 ************************************ 00:07:26.156 START TEST nvmf_tcp 00:07:26.156 ************************************ 00:07:26.156 22:27:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:26.416 * Looking for test storage... 00:07:26.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:26.416 22:27:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.416 22:27:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:26.416 22:27:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.416 22:27:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.416 22:27:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.416 22:27:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.416 22:27:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.416 22:27:27 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.416 22:27:27 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.416 22:27:27 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.416 22:27:27 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.416 22:27:27 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.416 22:27:27 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.416 22:27:27 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.416 22:27:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.416 22:27:27 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.416 22:27:27 -- scripts/common.sh@344 -- # : 1 00:07:26.416 22:27:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.416 22:27:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.416 22:27:27 -- scripts/common.sh@364 -- # decimal 1 00:07:26.416 22:27:27 -- scripts/common.sh@352 -- # local d=1 00:07:26.416 22:27:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.416 22:27:27 -- scripts/common.sh@354 -- # echo 1 00:07:26.416 22:27:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.416 22:27:27 -- scripts/common.sh@365 -- # decimal 2 00:07:26.416 22:27:27 -- scripts/common.sh@352 -- # local d=2 00:07:26.416 22:27:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.416 22:27:27 -- scripts/common.sh@354 -- # echo 2 00:07:26.416 22:27:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.416 22:27:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.416 22:27:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.416 22:27:27 -- scripts/common.sh@367 -- # return 0 00:07:26.416 22:27:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.416 22:27:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.416 --rc genhtml_branch_coverage=1 00:07:26.416 --rc genhtml_function_coverage=1 00:07:26.416 --rc genhtml_legend=1 00:07:26.416 --rc geninfo_all_blocks=1 00:07:26.416 --rc geninfo_unexecuted_blocks=1 00:07:26.416 00:07:26.416 ' 00:07:26.416 22:27:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.416 --rc genhtml_branch_coverage=1 00:07:26.416 --rc genhtml_function_coverage=1 00:07:26.416 --rc genhtml_legend=1 00:07:26.416 --rc geninfo_all_blocks=1 00:07:26.417 --rc geninfo_unexecuted_blocks=1 00:07:26.417 00:07:26.417 ' 00:07:26.417 22:27:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.417 --rc genhtml_branch_coverage=1 00:07:26.417 --rc genhtml_function_coverage=1 00:07:26.417 --rc genhtml_legend=1 00:07:26.417 --rc geninfo_all_blocks=1 00:07:26.417 --rc geninfo_unexecuted_blocks=1 00:07:26.417 00:07:26.417 ' 00:07:26.417 22:27:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.417 --rc genhtml_branch_coverage=1 00:07:26.417 --rc genhtml_function_coverage=1 00:07:26.417 --rc genhtml_legend=1 00:07:26.417 --rc geninfo_all_blocks=1 00:07:26.417 --rc geninfo_unexecuted_blocks=1 00:07:26.417 00:07:26.417 ' 00:07:26.417 22:27:27 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:26.417 22:27:27 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:26.417 22:27:27 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.417 22:27:27 -- nvmf/common.sh@7 -- # uname -s 00:07:26.417 22:27:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.417 22:27:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.417 22:27:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.417 22:27:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.417 22:27:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.417 22:27:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.417 22:27:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.417 22:27:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.417 22:27:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.417 22:27:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.417 22:27:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:07:26.417 22:27:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:07:26.417 22:27:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.417 22:27:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.417 22:27:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.417 22:27:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.417 22:27:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.417 22:27:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.417 22:27:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.417 22:27:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.417 22:27:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.417 22:27:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.417 22:27:27 -- paths/export.sh@5 -- # export PATH 00:07:26.417 22:27:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.417 22:27:27 -- nvmf/common.sh@46 -- # : 0 00:07:26.417 22:27:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.417 22:27:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.417 22:27:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.417 22:27:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.417 22:27:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.417 22:27:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:26.417 22:27:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.417 22:27:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.417 22:27:27 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:26.417 22:27:27 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:26.417 22:27:27 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:26.417 22:27:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.417 22:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:26.417 22:27:27 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:26.417 22:27:27 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:26.417 22:27:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:26.417 22:27:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.417 22:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:26.417 ************************************ 00:07:26.417 START TEST nvmf_example 00:07:26.417 ************************************ 00:07:26.417 22:27:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:26.677 * Looking for test storage... 00:07:26.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.677 22:27:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.677 22:27:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:26.677 22:27:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.677 22:27:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.677 22:27:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.677 22:27:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.677 22:27:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.677 22:27:27 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.677 22:27:27 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.677 22:27:27 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.677 22:27:27 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.677 22:27:27 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.677 22:27:27 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.677 22:27:27 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.677 22:27:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.677 22:27:27 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.677 22:27:27 -- scripts/common.sh@344 -- # : 1 00:07:26.677 22:27:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.677 22:27:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.677 22:27:27 -- scripts/common.sh@364 -- # decimal 1 00:07:26.677 22:27:27 -- scripts/common.sh@352 -- # local d=1 00:07:26.677 22:27:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.677 22:27:27 -- scripts/common.sh@354 -- # echo 1 00:07:26.677 22:27:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.677 22:27:27 -- scripts/common.sh@365 -- # decimal 2 00:07:26.677 22:27:27 -- scripts/common.sh@352 -- # local d=2 00:07:26.677 22:27:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.677 22:27:27 -- scripts/common.sh@354 -- # echo 2 00:07:26.677 22:27:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.677 22:27:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.677 22:27:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.677 22:27:27 -- scripts/common.sh@367 -- # return 0 00:07:26.677 22:27:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.677 22:27:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.677 --rc genhtml_branch_coverage=1 00:07:26.677 --rc genhtml_function_coverage=1 00:07:26.677 --rc genhtml_legend=1 00:07:26.677 --rc geninfo_all_blocks=1 00:07:26.677 --rc geninfo_unexecuted_blocks=1 00:07:26.677 00:07:26.677 ' 00:07:26.677 22:27:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.677 --rc genhtml_branch_coverage=1 00:07:26.677 --rc genhtml_function_coverage=1 00:07:26.677 --rc genhtml_legend=1 00:07:26.677 --rc geninfo_all_blocks=1 00:07:26.677 --rc geninfo_unexecuted_blocks=1 00:07:26.677 00:07:26.677 ' 00:07:26.677 22:27:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.677 --rc genhtml_branch_coverage=1 00:07:26.677 --rc genhtml_function_coverage=1 00:07:26.677 --rc genhtml_legend=1 00:07:26.677 --rc geninfo_all_blocks=1 00:07:26.677 --rc geninfo_unexecuted_blocks=1 00:07:26.677 00:07:26.677 ' 00:07:26.677 22:27:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.677 --rc genhtml_branch_coverage=1 00:07:26.677 --rc genhtml_function_coverage=1 00:07:26.677 --rc genhtml_legend=1 00:07:26.677 --rc geninfo_all_blocks=1 00:07:26.677 --rc geninfo_unexecuted_blocks=1 00:07:26.677 00:07:26.677 ' 00:07:26.677 22:27:27 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.677 22:27:27 -- nvmf/common.sh@7 -- # uname -s 00:07:26.677 22:27:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.677 22:27:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.677 22:27:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.677 22:27:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.677 22:27:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.677 22:27:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.677 22:27:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.677 22:27:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.677 22:27:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.677 22:27:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.677 22:27:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
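The nvmf/common.sh environment set up above defines the test ports (4420-4422) and a host NQN generated with nvme gen-hostnqn. The example test itself drives I/O with spdk_nvme_perf rather than the kernel initiator, but a hedged sketch of what those NVME_HOST variables are for, using the subsystem NQN and address this test creates later (treat the exact values as illustrative):

    # generate a host identity, as nvmf/common.sh does
    HOSTNQN=$(nvme gen-hostnqn)

    # connect a kernel initiator to the subsystem the test exposes on 10.0.0.2:4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN"

    # ... run I/O against the new /dev/nvme* device ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1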
00:07:26.677 22:27:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:07:26.677 22:27:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.677 22:27:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.677 22:27:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.677 22:27:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.677 22:27:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.677 22:27:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.677 22:27:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.677 22:27:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.677 22:27:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.677 22:27:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.677 22:27:27 -- paths/export.sh@5 -- # export PATH 00:07:26.677 22:27:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.677 22:27:27 -- nvmf/common.sh@46 -- # : 0 00:07:26.677 22:27:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.677 22:27:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.677 22:27:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.677 22:27:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.677 22:27:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.677 22:27:27 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:26.677 22:27:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.677 22:27:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.677 22:27:27 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:26.677 22:27:27 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:26.677 22:27:27 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:26.677 22:27:27 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:26.677 22:27:27 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:26.678 22:27:27 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:26.678 22:27:27 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:26.678 22:27:27 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:26.678 22:27:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.678 22:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:26.678 22:27:27 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:26.678 22:27:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:26.678 22:27:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.678 22:27:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:26.678 22:27:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:26.678 22:27:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:26.678 22:27:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.678 22:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.678 22:27:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.678 22:27:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:26.678 22:27:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:26.678 22:27:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:26.678 22:27:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:26.678 22:27:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:26.678 22:27:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:26.678 22:27:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.678 22:27:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.678 22:27:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:26.678 22:27:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:26.678 22:27:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.678 22:27:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.678 22:27:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.678 22:27:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.678 22:27:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.678 22:27:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:26.678 22:27:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.678 22:27:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.678 22:27:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:26.678 Cannot find device "nvmf_init_br" 00:07:26.678 22:27:27 -- nvmf/common.sh@153 -- # true 00:07:26.678 22:27:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:26.678 Cannot find device "nvmf_tgt_br" 00:07:26.678 22:27:27 -- nvmf/common.sh@154 -- # true 00:07:26.678 22:27:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.678 Cannot find device "nvmf_tgt_br2" 
00:07:26.678 22:27:27 -- nvmf/common.sh@155 -- # true 00:07:26.678 22:27:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:26.678 Cannot find device "nvmf_init_br" 00:07:26.678 22:27:27 -- nvmf/common.sh@156 -- # true 00:07:26.678 22:27:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:26.678 Cannot find device "nvmf_tgt_br" 00:07:26.937 22:27:27 -- nvmf/common.sh@157 -- # true 00:07:26.937 22:27:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:26.937 Cannot find device "nvmf_tgt_br2" 00:07:26.937 22:27:27 -- nvmf/common.sh@158 -- # true 00:07:26.937 22:27:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:26.937 Cannot find device "nvmf_br" 00:07:26.937 22:27:27 -- nvmf/common.sh@159 -- # true 00:07:26.937 22:27:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:26.937 Cannot find device "nvmf_init_if" 00:07:26.937 22:27:27 -- nvmf/common.sh@160 -- # true 00:07:26.937 22:27:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.937 22:27:27 -- nvmf/common.sh@161 -- # true 00:07:26.937 22:27:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.937 22:27:27 -- nvmf/common.sh@162 -- # true 00:07:26.937 22:27:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:26.937 22:27:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:26.937 22:27:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:26.937 22:27:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:26.937 22:27:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:26.937 22:27:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:26.937 22:27:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:26.937 22:27:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:26.937 22:27:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:26.937 22:27:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:26.937 22:27:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:26.937 22:27:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:26.937 22:27:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:26.937 22:27:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:26.937 22:27:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:26.937 22:27:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:26.937 22:27:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:26.937 22:27:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:26.937 22:27:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.937 22:27:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.938 22:27:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.938 22:27:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:27.197 22:27:27 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:27.197 22:27:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:27.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:07:27.197 00:07:27.197 --- 10.0.0.2 ping statistics --- 00:07:27.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.197 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:07:27.197 22:27:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:27.197 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:27.197 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:07:27.197 00:07:27.197 --- 10.0.0.3 ping statistics --- 00:07:27.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.197 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:27.197 22:27:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:27.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:27.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:27.197 00:07:27.197 --- 10.0.0.1 ping statistics --- 00:07:27.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.197 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:27.197 22:27:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.197 22:27:27 -- nvmf/common.sh@421 -- # return 0 00:07:27.197 22:27:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:27.197 22:27:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.197 22:27:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:27.197 22:27:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:27.197 22:27:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.197 22:27:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:27.197 22:27:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:27.197 22:27:27 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:27.197 22:27:27 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:27.197 22:27:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.197 22:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:27.197 22:27:27 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:27.197 22:27:27 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:27.197 22:27:27 -- target/nvmf_example.sh@34 -- # nvmfpid=71850 00:07:27.197 22:27:27 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:27.197 22:27:27 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.197 22:27:27 -- target/nvmf_example.sh@36 -- # waitforlisten 71850 00:07:27.197 22:27:27 -- common/autotest_common.sh@829 -- # '[' -z 71850 ']' 00:07:27.197 22:27:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.197 22:27:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.197 22:27:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
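The setup traced above builds the test topology from a network namespace, veth pairs, and a bridge, then verifies it with ping before launching the target inside the namespace. A condensed sketch of that same topology, kept to the commands visible in the trace (only the first target interface is shown; the second veth pair and its addresses follow the same pattern):

    # namespace that will host the NVMe-oF target
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: initiator side stays in the root namespace, target side moves into the netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addresses on the 10.0.0.0/24 test subnet
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # bring everything up and bridge the peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open the NVMe/TCP port and confirm reachability, as the trace does
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2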
00:07:27.198 22:27:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.198 22:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:28.134 22:27:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.134 22:27:28 -- common/autotest_common.sh@862 -- # return 0 00:07:28.134 22:27:28 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:28.134 22:27:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.134 22:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:28.393 22:27:28 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.393 22:27:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.393 22:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:28.393 22:27:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.393 22:27:28 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:28.393 22:27:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.393 22:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:28.393 22:27:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.393 22:27:28 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:28.393 22:27:28 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:28.393 22:27:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.393 22:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:28.393 22:27:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.393 22:27:28 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:28.393 22:27:28 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:28.393 22:27:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.393 22:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:28.393 22:27:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.393 22:27:28 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.393 22:27:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.393 22:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:28.393 22:27:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.393 22:27:28 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:28.393 22:27:28 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:40.624 Initializing NVMe Controllers 00:07:40.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:40.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:40.624 Initialization complete. Launching workers. 
00:07:40.624 ======================================================== 00:07:40.624 Latency(us) 00:07:40.624 Device Information : IOPS MiB/s Average min max 00:07:40.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17090.00 66.76 3746.72 598.63 20866.87 00:07:40.624 ======================================================== 00:07:40.624 Total : 17090.00 66.76 3746.72 598.63 20866.87 00:07:40.624 00:07:40.624 22:27:39 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:40.624 22:27:39 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:40.624 22:27:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:40.624 22:27:39 -- nvmf/common.sh@116 -- # sync 00:07:40.624 22:27:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:40.624 22:27:39 -- nvmf/common.sh@119 -- # set +e 00:07:40.624 22:27:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:40.624 22:27:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:40.624 rmmod nvme_tcp 00:07:40.624 rmmod nvme_fabrics 00:07:40.624 rmmod nvme_keyring 00:07:40.624 22:27:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:40.624 22:27:39 -- nvmf/common.sh@123 -- # set -e 00:07:40.624 22:27:39 -- nvmf/common.sh@124 -- # return 0 00:07:40.624 22:27:39 -- nvmf/common.sh@477 -- # '[' -n 71850 ']' 00:07:40.624 22:27:39 -- nvmf/common.sh@478 -- # killprocess 71850 00:07:40.624 22:27:39 -- common/autotest_common.sh@936 -- # '[' -z 71850 ']' 00:07:40.624 22:27:39 -- common/autotest_common.sh@940 -- # kill -0 71850 00:07:40.624 22:27:39 -- common/autotest_common.sh@941 -- # uname 00:07:40.624 22:27:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.624 22:27:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71850 00:07:40.624 22:27:39 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:40.624 killing process with pid 71850 00:07:40.624 22:27:39 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:40.624 22:27:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71850' 00:07:40.624 22:27:39 -- common/autotest_common.sh@955 -- # kill 71850 00:07:40.624 22:27:39 -- common/autotest_common.sh@960 -- # wait 71850 00:07:40.624 nvmf threads initialize successfully 00:07:40.624 bdev subsystem init successfully 00:07:40.624 created a nvmf target service 00:07:40.624 create targets's poll groups done 00:07:40.624 all subsystems of target started 00:07:40.624 nvmf target is running 00:07:40.624 all subsystems of target stopped 00:07:40.624 destroy targets's poll groups done 00:07:40.624 destroyed the nvmf target service 00:07:40.624 bdev subsystem finish successfully 00:07:40.624 nvmf threads destroy successfully 00:07:40.624 22:27:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:40.624 22:27:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:40.624 22:27:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:40.624 22:27:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.624 22:27:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:40.624 22:27:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.624 22:27:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.624 22:27:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.624 22:27:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:40.624 22:27:39 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:40.624 22:27:39 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:07:40.624 22:27:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.624 00:07:40.624 real 0m12.609s 00:07:40.624 user 0m45.173s 00:07:40.624 sys 0m2.125s 00:07:40.624 22:27:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.624 ************************************ 00:07:40.624 END TEST nvmf_example 00:07:40.624 22:27:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.624 ************************************ 00:07:40.624 22:27:39 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:40.624 22:27:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:40.624 22:27:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.624 22:27:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.624 ************************************ 00:07:40.624 START TEST nvmf_filesystem 00:07:40.624 ************************************ 00:07:40.624 22:27:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:40.624 * Looking for test storage... 00:07:40.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.624 22:27:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.624 22:27:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.624 22:27:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.624 22:27:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.624 22:27:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.624 22:27:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.624 22:27:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.624 22:27:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.624 22:27:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.624 22:27:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.624 22:27:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.624 22:27:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.624 22:27:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.625 22:27:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.625 22:27:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.625 22:27:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.625 22:27:39 -- scripts/common.sh@344 -- # : 1 00:07:40.625 22:27:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.625 22:27:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.625 22:27:39 -- scripts/common.sh@364 -- # decimal 1 00:07:40.625 22:27:39 -- scripts/common.sh@352 -- # local d=1 00:07:40.625 22:27:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.625 22:27:39 -- scripts/common.sh@354 -- # echo 1 00:07:40.625 22:27:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.625 22:27:39 -- scripts/common.sh@365 -- # decimal 2 00:07:40.625 22:27:39 -- scripts/common.sh@352 -- # local d=2 00:07:40.625 22:27:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.625 22:27:39 -- scripts/common.sh@354 -- # echo 2 00:07:40.625 22:27:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.625 22:27:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.625 22:27:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.625 22:27:39 -- scripts/common.sh@367 -- # return 0 00:07:40.625 22:27:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.625 22:27:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.625 --rc genhtml_branch_coverage=1 00:07:40.625 --rc genhtml_function_coverage=1 00:07:40.625 --rc genhtml_legend=1 00:07:40.625 --rc geninfo_all_blocks=1 00:07:40.625 --rc geninfo_unexecuted_blocks=1 00:07:40.625 00:07:40.625 ' 00:07:40.625 22:27:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.625 --rc genhtml_branch_coverage=1 00:07:40.625 --rc genhtml_function_coverage=1 00:07:40.625 --rc genhtml_legend=1 00:07:40.625 --rc geninfo_all_blocks=1 00:07:40.625 --rc geninfo_unexecuted_blocks=1 00:07:40.625 00:07:40.625 ' 00:07:40.625 22:27:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.625 --rc genhtml_branch_coverage=1 00:07:40.625 --rc genhtml_function_coverage=1 00:07:40.625 --rc genhtml_legend=1 00:07:40.625 --rc geninfo_all_blocks=1 00:07:40.625 --rc geninfo_unexecuted_blocks=1 00:07:40.625 00:07:40.625 ' 00:07:40.625 22:27:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.625 --rc genhtml_branch_coverage=1 00:07:40.625 --rc genhtml_function_coverage=1 00:07:40.625 --rc genhtml_legend=1 00:07:40.625 --rc geninfo_all_blocks=1 00:07:40.625 --rc geninfo_unexecuted_blocks=1 00:07:40.625 00:07:40.625 ' 00:07:40.625 22:27:39 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:40.625 22:27:39 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:40.625 22:27:39 -- common/autotest_common.sh@34 -- # set -e 00:07:40.625 22:27:39 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:40.625 22:27:39 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:40.625 22:27:39 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:40.625 22:27:39 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:40.625 22:27:39 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:40.625 22:27:39 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:40.625 22:27:39 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:40.625 22:27:39 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:40.625 22:27:39 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:07:40.625 22:27:39 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:40.625 22:27:39 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:40.625 22:27:39 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:40.625 22:27:39 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:40.625 22:27:39 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:40.625 22:27:39 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:40.625 22:27:39 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:40.625 22:27:39 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:40.625 22:27:39 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:40.625 22:27:39 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:40.625 22:27:39 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:40.625 22:27:39 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:40.625 22:27:39 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:40.625 22:27:39 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:40.625 22:27:39 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:40.625 22:27:39 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:40.625 22:27:39 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:40.625 22:27:39 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:40.625 22:27:39 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:40.625 22:27:39 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:40.625 22:27:39 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:40.625 22:27:39 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:40.625 22:27:39 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:40.625 22:27:39 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:40.625 22:27:39 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:40.625 22:27:39 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:40.625 22:27:39 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:40.625 22:27:39 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:40.625 22:27:39 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:40.625 22:27:39 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:40.625 22:27:39 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:40.625 22:27:39 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:40.625 22:27:39 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:40.625 22:27:39 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:40.625 22:27:39 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:40.625 22:27:39 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:40.625 22:27:39 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:40.625 22:27:39 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:40.625 22:27:39 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:40.625 22:27:39 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:40.625 22:27:39 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:40.625 22:27:39 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:40.625 22:27:39 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:40.625 22:27:39 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:40.625 22:27:39 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:40.625 22:27:39 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:07:40.625 22:27:39 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:40.625 22:27:39 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:40.625 22:27:39 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:40.625 22:27:39 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:40.625 22:27:39 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:40.625 22:27:39 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:40.625 22:27:39 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:40.625 22:27:39 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:40.625 22:27:39 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:40.625 22:27:39 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:40.625 22:27:39 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:40.625 22:27:39 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:40.625 22:27:39 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:40.625 22:27:39 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:40.625 22:27:39 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:40.625 22:27:39 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:40.625 22:27:39 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:40.625 22:27:39 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:40.625 22:27:39 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:40.625 22:27:39 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:40.625 22:27:39 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:40.625 22:27:39 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:40.625 22:27:39 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:40.625 22:27:39 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:40.625 22:27:39 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:40.625 22:27:39 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:40.625 22:27:39 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:40.625 22:27:39 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:40.625 22:27:39 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:40.625 22:27:39 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:40.625 22:27:39 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:40.625 22:27:39 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:40.625 22:27:39 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:40.625 22:27:39 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.625 22:27:39 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:40.625 22:27:39 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.625 22:27:39 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:40.625 22:27:39 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:40.625 22:27:39 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:40.625 22:27:39 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:40.625 22:27:39 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:40.625 22:27:39 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:07:40.625 22:27:39 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:40.625 22:27:39 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:40.625 #define SPDK_CONFIG_H 00:07:40.625 #define SPDK_CONFIG_APPS 1 00:07:40.625 #define SPDK_CONFIG_ARCH native 00:07:40.625 #undef SPDK_CONFIG_ASAN 00:07:40.625 #define SPDK_CONFIG_AVAHI 1 00:07:40.625 #undef SPDK_CONFIG_CET 00:07:40.626 #define SPDK_CONFIG_COVERAGE 1 00:07:40.626 #define SPDK_CONFIG_CROSS_PREFIX 00:07:40.626 #undef SPDK_CONFIG_CRYPTO 00:07:40.626 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:40.626 #undef SPDK_CONFIG_CUSTOMOCF 00:07:40.626 #undef SPDK_CONFIG_DAOS 00:07:40.626 #define SPDK_CONFIG_DAOS_DIR 00:07:40.626 #define SPDK_CONFIG_DEBUG 1 00:07:40.626 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:40.626 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:40.626 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:40.626 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:40.626 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:40.626 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:40.626 #define SPDK_CONFIG_EXAMPLES 1 00:07:40.626 #undef SPDK_CONFIG_FC 00:07:40.626 #define SPDK_CONFIG_FC_PATH 00:07:40.626 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:40.626 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:40.626 #undef SPDK_CONFIG_FUSE 00:07:40.626 #undef SPDK_CONFIG_FUZZER 00:07:40.626 #define SPDK_CONFIG_FUZZER_LIB 00:07:40.626 #define SPDK_CONFIG_GOLANG 1 00:07:40.626 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:40.626 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:40.626 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:40.626 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:40.626 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:40.626 #define SPDK_CONFIG_IDXD 1 00:07:40.626 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:40.626 #undef SPDK_CONFIG_IPSEC_MB 00:07:40.626 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:40.626 #define SPDK_CONFIG_ISAL 1 00:07:40.626 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:40.626 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:40.626 #define SPDK_CONFIG_LIBDIR 00:07:40.626 #undef SPDK_CONFIG_LTO 00:07:40.626 #define SPDK_CONFIG_MAX_LCORES 00:07:40.626 #define SPDK_CONFIG_NVME_CUSE 1 00:07:40.626 #undef SPDK_CONFIG_OCF 00:07:40.626 #define SPDK_CONFIG_OCF_PATH 00:07:40.626 #define SPDK_CONFIG_OPENSSL_PATH 00:07:40.626 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:40.626 #undef SPDK_CONFIG_PGO_USE 00:07:40.626 #define SPDK_CONFIG_PREFIX /usr/local 00:07:40.626 #undef SPDK_CONFIG_RAID5F 00:07:40.626 #undef SPDK_CONFIG_RBD 00:07:40.626 #define SPDK_CONFIG_RDMA 1 00:07:40.626 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:40.626 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:40.626 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:40.626 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:40.626 #define SPDK_CONFIG_SHARED 1 00:07:40.626 #undef SPDK_CONFIG_SMA 00:07:40.626 #define SPDK_CONFIG_TESTS 1 00:07:40.626 #undef SPDK_CONFIG_TSAN 00:07:40.626 #define SPDK_CONFIG_UBLK 1 00:07:40.626 #define SPDK_CONFIG_UBSAN 1 00:07:40.626 #undef SPDK_CONFIG_UNIT_TESTS 00:07:40.626 #undef SPDK_CONFIG_URING 00:07:40.626 #define SPDK_CONFIG_URING_PATH 00:07:40.626 #undef SPDK_CONFIG_URING_ZNS 00:07:40.626 #define SPDK_CONFIG_USDT 1 00:07:40.626 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:40.626 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:40.626 #undef SPDK_CONFIG_VFIO_USER 00:07:40.626 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:07:40.626 #define SPDK_CONFIG_VHOST 1 00:07:40.626 #define SPDK_CONFIG_VIRTIO 1 00:07:40.626 #undef SPDK_CONFIG_VTUNE 00:07:40.626 #define SPDK_CONFIG_VTUNE_DIR 00:07:40.626 #define SPDK_CONFIG_WERROR 1 00:07:40.626 #define SPDK_CONFIG_WPDK_DIR 00:07:40.626 #undef SPDK_CONFIG_XNVME 00:07:40.626 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:40.626 22:27:39 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:40.626 22:27:39 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.626 22:27:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.626 22:27:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.626 22:27:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.626 22:27:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.626 22:27:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.626 22:27:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.626 22:27:39 -- paths/export.sh@5 -- # export PATH 00:07:40.626 22:27:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.626 22:27:39 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:40.626 22:27:39 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:40.626 22:27:39 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:40.626 22:27:39 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:40.626 22:27:39 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:40.626 22:27:39 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:40.626 22:27:39 -- pm/common@16 -- # TEST_TAG=N/A 00:07:40.626 22:27:39 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:40.626 22:27:39 -- common/autotest_common.sh@52 -- # : 1 00:07:40.626 22:27:39 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:40.626 22:27:39 -- common/autotest_common.sh@56 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:40.626 22:27:39 -- common/autotest_common.sh@58 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:40.626 22:27:39 -- common/autotest_common.sh@60 -- # : 1 00:07:40.626 22:27:39 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:40.626 22:27:39 -- common/autotest_common.sh@62 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:40.626 22:27:39 -- common/autotest_common.sh@64 -- # : 00:07:40.626 22:27:39 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:40.626 22:27:39 -- common/autotest_common.sh@66 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:40.626 22:27:39 -- common/autotest_common.sh@68 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:40.626 22:27:39 -- common/autotest_common.sh@70 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:40.626 22:27:39 -- common/autotest_common.sh@72 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:40.626 22:27:39 -- common/autotest_common.sh@74 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:40.626 22:27:39 -- common/autotest_common.sh@76 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:40.626 22:27:39 -- common/autotest_common.sh@78 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:40.626 22:27:39 -- common/autotest_common.sh@80 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:40.626 22:27:39 -- common/autotest_common.sh@82 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:40.626 22:27:39 -- common/autotest_common.sh@84 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:40.626 22:27:39 -- common/autotest_common.sh@86 -- # : 1 00:07:40.626 22:27:39 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:40.626 22:27:39 -- common/autotest_common.sh@88 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:40.626 22:27:39 -- common/autotest_common.sh@90 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:40.626 22:27:39 -- common/autotest_common.sh@92 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:40.626 22:27:39 -- common/autotest_common.sh@94 -- # : 0 00:07:40.626 22:27:39 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:40.626 22:27:39 -- common/autotest_common.sh@96 -- # : tcp 00:07:40.626 22:27:39 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:40.626 22:27:39 -- common/autotest_common.sh@98 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:40.626 22:27:39 -- common/autotest_common.sh@100 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:40.626 22:27:39 -- common/autotest_common.sh@102 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:40.626 22:27:39 -- common/autotest_common.sh@104 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:40.626 22:27:39 -- common/autotest_common.sh@106 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:40.626 22:27:39 -- common/autotest_common.sh@108 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:40.626 22:27:39 -- common/autotest_common.sh@110 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:40.626 22:27:39 -- common/autotest_common.sh@112 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:40.626 22:27:39 -- common/autotest_common.sh@114 -- # : 0 00:07:40.626 22:27:39 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:40.626 22:27:39 -- common/autotest_common.sh@116 -- # : 1 00:07:40.626 22:27:39 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:40.627 22:27:39 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:40.627 22:27:39 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:40.627 22:27:39 -- common/autotest_common.sh@120 -- # : 0 00:07:40.627 22:27:39 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:40.627 22:27:39 -- common/autotest_common.sh@122 -- # : 0 00:07:40.627 22:27:39 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:40.627 22:27:39 -- common/autotest_common.sh@124 -- # : 0 00:07:40.627 22:27:39 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:40.627 22:27:40 -- common/autotest_common.sh@126 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:40.627 22:27:40 -- common/autotest_common.sh@128 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:40.627 22:27:40 -- common/autotest_common.sh@130 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:40.627 22:27:40 -- common/autotest_common.sh@132 -- # : v22.11.4 00:07:40.627 22:27:40 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:40.627 22:27:40 -- common/autotest_common.sh@134 -- # : true 00:07:40.627 22:27:40 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:40.627 22:27:40 -- common/autotest_common.sh@136 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:40.627 22:27:40 -- common/autotest_common.sh@138 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:40.627 22:27:40 -- common/autotest_common.sh@140 -- # : 1 00:07:40.627 22:27:40 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:40.627 22:27:40 -- 
common/autotest_common.sh@142 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:40.627 22:27:40 -- common/autotest_common.sh@144 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:40.627 22:27:40 -- common/autotest_common.sh@146 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:40.627 22:27:40 -- common/autotest_common.sh@148 -- # : 00:07:40.627 22:27:40 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:40.627 22:27:40 -- common/autotest_common.sh@150 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:40.627 22:27:40 -- common/autotest_common.sh@152 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:40.627 22:27:40 -- common/autotest_common.sh@154 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:40.627 22:27:40 -- common/autotest_common.sh@156 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:40.627 22:27:40 -- common/autotest_common.sh@158 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:40.627 22:27:40 -- common/autotest_common.sh@160 -- # : 0 00:07:40.627 22:27:40 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:40.627 22:27:40 -- common/autotest_common.sh@163 -- # : 00:07:40.627 22:27:40 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:40.627 22:27:40 -- common/autotest_common.sh@165 -- # : 1 00:07:40.627 22:27:40 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:40.627 22:27:40 -- common/autotest_common.sh@167 -- # : 1 00:07:40.627 22:27:40 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:40.627 22:27:40 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:40.627 22:27:40 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:40.627 22:27:40 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:40.627 22:27:40 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:40.627 22:27:40 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.627 22:27:40 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.627 22:27:40 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.627 22:27:40 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.627 22:27:40 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.627 22:27:40 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.627 22:27:40 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:40.627 22:27:40 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:40.627 22:27:40 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:40.627 22:27:40 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:40.627 22:27:40 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.627 22:27:40 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.627 22:27:40 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.627 22:27:40 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.627 22:27:40 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:40.627 22:27:40 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:40.627 22:27:40 -- common/autotest_common.sh@196 -- # cat 00:07:40.627 22:27:40 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:40.627 22:27:40 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.627 22:27:40 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.627 22:27:40 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.627 22:27:40 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.627 22:27:40 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:40.627 22:27:40 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:40.627 22:27:40 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.627 22:27:40 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.627 22:27:40 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.627 22:27:40 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.627 22:27:40 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.627 22:27:40 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.627 22:27:40 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.627 22:27:40 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.627 22:27:40 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:40.627 22:27:40 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:40.627 22:27:40 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.627 22:27:40 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.627 22:27:40 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:40.627 22:27:40 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:40.627 22:27:40 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:40.627 22:27:40 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:40.627 22:27:40 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:40.627 22:27:40 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:40.627 22:27:40 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:40.627 22:27:40 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:40.627 22:27:40 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:40.627 22:27:40 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:40.627 22:27:40 -- common/autotest_common.sh@259 -- # valgrind= 00:07:40.627 22:27:40 -- common/autotest_common.sh@265 -- # uname -s 00:07:40.627 22:27:40 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:40.627 22:27:40 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:40.627 22:27:40 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:40.627 22:27:40 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:40.627 22:27:40 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:40.627 22:27:40 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:40.627 22:27:40 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:40.627 22:27:40 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:07:40.627 22:27:40 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:40.627 22:27:40 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:40.627 22:27:40 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:40.627 22:27:40 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:40.627 22:27:40 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:40.627 22:27:40 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:40.627 22:27:40 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:40.627 22:27:40 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:40.627 22:27:40 -- common/autotest_common.sh@319 -- # [[ 
-z 72094 ]] 00:07:40.627 22:27:40 -- common/autotest_common.sh@319 -- # kill -0 72094 00:07:40.627 22:27:40 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:40.627 22:27:40 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:40.627 22:27:40 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:40.628 22:27:40 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:40.628 22:27:40 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:40.628 22:27:40 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:40.628 22:27:40 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:40.628 22:27:40 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:40.628 22:27:40 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.wSdBxg 00:07:40.628 22:27:40 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:40.628 22:27:40 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:40.628 22:27:40 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:40.628 22:27:40 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.wSdBxg/tests/target /tmp/spdk.wSdBxg 00:07:40.628 22:27:40 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@328 -- # df -T 00:07:40.628 22:27:40 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=13431672832 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=6149910528 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:07:40.628 22:27:40 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=13431672832 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=6149910528 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266286080 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:40.628 22:27:40 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # avails["$mount"]=98375073792 00:07:40.628 22:27:40 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:07:40.628 22:27:40 -- common/autotest_common.sh@364 -- # uses["$mount"]=1327706112 00:07:40.628 22:27:40 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:40.628 22:27:40 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:07:40.628 * Looking for test storage... 00:07:40.628 22:27:40 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:40.628 22:27:40 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:40.628 22:27:40 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.628 22:27:40 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:40.628 22:27:40 -- common/autotest_common.sh@373 -- # mount=/home 00:07:40.628 22:27:40 -- common/autotest_common.sh@375 -- # target_space=13431672832 00:07:40.628 22:27:40 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:40.628 22:27:40 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:40.628 22:27:40 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:07:40.628 22:27:40 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:07:40.628 22:27:40 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:07:40.628 22:27:40 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.628 22:27:40 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.628 22:27:40 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.628 22:27:40 -- common/autotest_common.sh@390 -- # return 0 00:07:40.628 22:27:40 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:40.628 22:27:40 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:40.628 22:27:40 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:40.628 22:27:40 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:40.628 22:27:40 -- common/autotest_common.sh@1682 -- # true 00:07:40.628 22:27:40 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:40.628 22:27:40 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:40.628 22:27:40 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:40.628 22:27:40 -- common/autotest_common.sh@27 -- # exec 00:07:40.628 22:27:40 -- common/autotest_common.sh@29 -- # exec 00:07:40.628 22:27:40 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:40.628 22:27:40 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:40.628 22:27:40 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:40.628 22:27:40 -- common/autotest_common.sh@18 -- # set -x 00:07:40.628 22:27:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.628 22:27:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.628 22:27:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.628 22:27:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.628 22:27:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.628 22:27:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.628 22:27:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.628 22:27:40 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.628 22:27:40 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.628 22:27:40 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.628 22:27:40 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.628 22:27:40 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.628 22:27:40 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.628 22:27:40 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.628 22:27:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.628 22:27:40 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.628 22:27:40 -- scripts/common.sh@344 -- # : 1 00:07:40.628 22:27:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.628 22:27:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.628 22:27:40 -- scripts/common.sh@364 -- # decimal 1 00:07:40.628 22:27:40 -- scripts/common.sh@352 -- # local d=1 00:07:40.628 22:27:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.628 22:27:40 -- scripts/common.sh@354 -- # echo 1 00:07:40.628 22:27:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.628 22:27:40 -- scripts/common.sh@365 -- # decimal 2 00:07:40.628 22:27:40 -- scripts/common.sh@352 -- # local d=2 00:07:40.628 22:27:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.628 22:27:40 -- scripts/common.sh@354 -- # echo 2 00:07:40.628 22:27:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.628 22:27:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.628 22:27:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.628 22:27:40 -- scripts/common.sh@367 -- # return 0 00:07:40.628 22:27:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.628 22:27:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.628 --rc genhtml_branch_coverage=1 00:07:40.628 --rc genhtml_function_coverage=1 00:07:40.628 --rc genhtml_legend=1 00:07:40.628 --rc geninfo_all_blocks=1 00:07:40.629 --rc geninfo_unexecuted_blocks=1 00:07:40.629 00:07:40.629 ' 00:07:40.629 22:27:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.629 --rc genhtml_branch_coverage=1 00:07:40.629 --rc genhtml_function_coverage=1 00:07:40.629 --rc genhtml_legend=1 00:07:40.629 --rc geninfo_all_blocks=1 00:07:40.629 --rc geninfo_unexecuted_blocks=1 00:07:40.629 00:07:40.629 ' 00:07:40.629 22:27:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.629 --rc genhtml_branch_coverage=1 00:07:40.629 --rc genhtml_function_coverage=1 00:07:40.629 --rc genhtml_legend=1 00:07:40.629 --rc geninfo_all_blocks=1 00:07:40.629 --rc 
geninfo_unexecuted_blocks=1 00:07:40.629 00:07:40.629 ' 00:07:40.629 22:27:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.629 --rc genhtml_branch_coverage=1 00:07:40.629 --rc genhtml_function_coverage=1 00:07:40.629 --rc genhtml_legend=1 00:07:40.629 --rc geninfo_all_blocks=1 00:07:40.629 --rc geninfo_unexecuted_blocks=1 00:07:40.629 00:07:40.629 ' 00:07:40.629 22:27:40 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:40.629 22:27:40 -- nvmf/common.sh@7 -- # uname -s 00:07:40.629 22:27:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.629 22:27:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.629 22:27:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.629 22:27:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.629 22:27:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.629 22:27:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.629 22:27:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.629 22:27:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.629 22:27:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.629 22:27:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.629 22:27:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:07:40.629 22:27:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:07:40.629 22:27:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.629 22:27:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.629 22:27:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:40.629 22:27:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.629 22:27:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.629 22:27:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.629 22:27:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.629 22:27:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.629 22:27:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.629 22:27:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.629 22:27:40 -- paths/export.sh@5 -- # export PATH 00:07:40.629 22:27:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.629 22:27:40 -- nvmf/common.sh@46 -- # : 0 00:07:40.629 22:27:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:40.629 22:27:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:40.629 22:27:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:40.629 22:27:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.629 22:27:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.629 22:27:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:40.629 22:27:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:40.629 22:27:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:40.629 22:27:40 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:40.629 22:27:40 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:40.629 22:27:40 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:40.629 22:27:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:40.629 22:27:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.629 22:27:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:40.629 22:27:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:40.629 22:27:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:40.629 22:27:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.629 22:27:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.629 22:27:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.629 22:27:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:40.629 22:27:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:40.629 22:27:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:40.629 22:27:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:40.629 22:27:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:40.629 22:27:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:40.629 22:27:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.629 22:27:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.629 22:27:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:40.629 22:27:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:40.629 22:27:40 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:40.629 22:27:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:40.629 22:27:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:40.629 22:27:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.629 22:27:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:40.629 22:27:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:40.629 22:27:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:40.629 22:27:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:40.629 22:27:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:40.629 22:27:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:40.629 Cannot find device "nvmf_tgt_br" 00:07:40.629 22:27:40 -- nvmf/common.sh@154 -- # true 00:07:40.629 22:27:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:40.629 Cannot find device "nvmf_tgt_br2" 00:07:40.629 22:27:40 -- nvmf/common.sh@155 -- # true 00:07:40.629 22:27:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:40.629 22:27:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:40.629 Cannot find device "nvmf_tgt_br" 00:07:40.629 22:27:40 -- nvmf/common.sh@157 -- # true 00:07:40.629 22:27:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:40.629 Cannot find device "nvmf_tgt_br2" 00:07:40.629 22:27:40 -- nvmf/common.sh@158 -- # true 00:07:40.629 22:27:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:40.629 22:27:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:40.629 22:27:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:40.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.629 22:27:40 -- nvmf/common.sh@161 -- # true 00:07:40.629 22:27:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:40.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.630 22:27:40 -- nvmf/common.sh@162 -- # true 00:07:40.630 22:27:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:40.630 22:27:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:40.630 22:27:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:40.630 22:27:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:40.630 22:27:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:40.630 22:27:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:40.630 22:27:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:40.630 22:27:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:40.630 22:27:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:40.630 22:27:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:40.630 22:27:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:40.630 22:27:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:40.630 22:27:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:40.630 22:27:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:40.630 22:27:40 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:40.630 22:27:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:40.630 22:27:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:40.630 22:27:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:40.630 22:27:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:40.630 22:27:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:40.630 22:27:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:40.630 22:27:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:40.630 22:27:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:40.630 22:27:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:40.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:07:40.630 00:07:40.630 --- 10.0.0.2 ping statistics --- 00:07:40.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.630 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:40.630 22:27:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:40.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:40.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:07:40.630 00:07:40.630 --- 10.0.0.3 ping statistics --- 00:07:40.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.630 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:40.630 22:27:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:40.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:40.630 00:07:40.630 --- 10.0.0.1 ping statistics --- 00:07:40.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.630 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:40.630 22:27:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.630 22:27:40 -- nvmf/common.sh@421 -- # return 0 00:07:40.630 22:27:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:40.630 22:27:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.630 22:27:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:40.630 22:27:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:40.630 22:27:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.630 22:27:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:40.630 22:27:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:40.630 22:27:40 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:40.630 22:27:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:40.630 22:27:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.630 22:27:40 -- common/autotest_common.sh@10 -- # set +x 00:07:40.630 ************************************ 00:07:40.630 START TEST nvmf_filesystem_no_in_capsule 00:07:40.630 ************************************ 00:07:40.630 22:27:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:40.630 22:27:40 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:40.630 22:27:40 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:40.630 22:27:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:40.630 22:27:40 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:40.630 22:27:40 -- common/autotest_common.sh@10 -- # set +x 00:07:40.630 22:27:40 -- nvmf/common.sh@469 -- # nvmfpid=72269 00:07:40.630 22:27:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.630 22:27:40 -- nvmf/common.sh@470 -- # waitforlisten 72269 00:07:40.630 22:27:40 -- common/autotest_common.sh@829 -- # '[' -z 72269 ']' 00:07:40.630 22:27:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.630 22:27:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.630 22:27:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.630 22:27:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.630 22:27:40 -- common/autotest_common.sh@10 -- # set +x 00:07:40.630 [2024-11-20 22:27:40.580465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.630 [2024-11-20 22:27:40.580566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.630 [2024-11-20 22:27:40.722597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.630 [2024-11-20 22:27:40.811074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:40.630 [2024-11-20 22:27:40.811288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.630 [2024-11-20 22:27:40.811307] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.630 [2024-11-20 22:27:40.811319] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
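The topology that the preceding nvmf/common.sh trace builds can be condensed into a standalone sketch: one veth pair for the initiator side, two for the target, the target ends moved into a private network namespace, and the host-side peers joined by a bridge so NVMe/TCP traffic on port 4420 can flow between them. Interface names and the 10.0.0.0/24 addressing below are exactly the ones used in this run:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Target ends live inside the namespace where nvmf_tgt will run
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Allow NVMe/TCP (port 4420) in and bridge forwarding, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp        # host-side NVMe/TCP initiator driver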
00:07:40.630 [2024-11-20 22:27:40.811431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.630 [2024-11-20 22:27:40.811542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.630 [2024-11-20 22:27:40.812367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.630 [2024-11-20 22:27:40.812393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.889 22:27:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.889 22:27:41 -- common/autotest_common.sh@862 -- # return 0 00:07:40.889 22:27:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:40.889 22:27:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.889 22:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.147 22:27:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.147 22:27:41 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:41.147 22:27:41 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:41.147 22:27:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.147 22:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.147 [2024-11-20 22:27:41.642887] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.147 22:27:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.147 22:27:41 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:41.147 22:27:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.147 22:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.147 Malloc1 00:07:41.147 22:27:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.147 22:27:41 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:41.147 22:27:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.147 22:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.147 22:27:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.147 22:27:41 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:41.147 22:27:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.147 22:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.147 22:27:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.147 22:27:41 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.147 22:27:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.147 22:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.407 [2024-11-20 22:27:41.883208] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.407 22:27:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.407 22:27:41 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:41.407 22:27:41 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:41.407 22:27:41 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:41.407 22:27:41 -- common/autotest_common.sh@1369 -- # local bs 00:07:41.407 22:27:41 -- common/autotest_common.sh@1370 -- # local nb 00:07:41.407 22:27:41 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:41.407 22:27:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.407 22:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.407 
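Once nvmf_tgt has been launched inside the namespace (the trace shows "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF"), the test provisions it entirely over JSON-RPC. In the SPDK test harness rpc_cmd is a thin wrapper over scripts/rpc.py, so the calls above amount to roughly the following sketch; the -c 0 transport option means this first pass runs with no in-capsule data:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"    # what rpc_cmd forwards to
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, in-capsule data disabled
    $RPC bdev_malloc_create 512 512 -b Malloc1           # 512 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_get_bdevs -b Malloc1                       # queried to compute the expected device size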
22:27:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.407 22:27:41 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:41.407 { 00:07:41.407 "aliases": [ 00:07:41.407 "83ccba82-f5b5-4e89-92b8-73d338d82114" 00:07:41.407 ], 00:07:41.407 "assigned_rate_limits": { 00:07:41.407 "r_mbytes_per_sec": 0, 00:07:41.407 "rw_ios_per_sec": 0, 00:07:41.407 "rw_mbytes_per_sec": 0, 00:07:41.407 "w_mbytes_per_sec": 0 00:07:41.407 }, 00:07:41.407 "block_size": 512, 00:07:41.407 "claim_type": "exclusive_write", 00:07:41.407 "claimed": true, 00:07:41.407 "driver_specific": {}, 00:07:41.407 "memory_domains": [ 00:07:41.407 { 00:07:41.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.407 "dma_device_type": 2 00:07:41.407 } 00:07:41.407 ], 00:07:41.407 "name": "Malloc1", 00:07:41.407 "num_blocks": 1048576, 00:07:41.407 "product_name": "Malloc disk", 00:07:41.407 "supported_io_types": { 00:07:41.407 "abort": true, 00:07:41.407 "compare": false, 00:07:41.407 "compare_and_write": false, 00:07:41.407 "flush": true, 00:07:41.407 "nvme_admin": false, 00:07:41.407 "nvme_io": false, 00:07:41.407 "read": true, 00:07:41.407 "reset": true, 00:07:41.407 "unmap": true, 00:07:41.407 "write": true, 00:07:41.407 "write_zeroes": true 00:07:41.407 }, 00:07:41.407 "uuid": "83ccba82-f5b5-4e89-92b8-73d338d82114", 00:07:41.407 "zoned": false 00:07:41.407 } 00:07:41.407 ]' 00:07:41.407 22:27:41 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:41.407 22:27:41 -- common/autotest_common.sh@1372 -- # bs=512 00:07:41.407 22:27:41 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:41.407 22:27:42 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:41.407 22:27:42 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:41.407 22:27:42 -- common/autotest_common.sh@1377 -- # echo 512 00:07:41.407 22:27:42 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:41.407 22:27:42 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.665 22:27:42 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.665 22:27:42 -- common/autotest_common.sh@1187 -- # local i=0 00:07:41.665 22:27:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.665 22:27:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:41.665 22:27:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:43.569 22:27:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:43.569 22:27:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:43.569 22:27:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.569 22:27:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:43.569 22:27:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.569 22:27:44 -- common/autotest_common.sh@1197 -- # return 0 00:07:43.569 22:27:44 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:43.569 22:27:44 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:43.569 22:27:44 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:43.569 22:27:44 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:43.569 22:27:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:43.570 22:27:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:43.570 22:27:44 -- 
setup/common.sh@80 -- # echo 536870912 00:07:43.570 22:27:44 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:43.570 22:27:44 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:43.570 22:27:44 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:43.570 22:27:44 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:43.570 22:27:44 -- target/filesystem.sh@69 -- # partprobe 00:07:43.828 22:27:44 -- target/filesystem.sh@70 -- # sleep 1 00:07:44.765 22:27:45 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:44.765 22:27:45 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:44.765 22:27:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:44.765 22:27:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.765 22:27:45 -- common/autotest_common.sh@10 -- # set +x 00:07:44.765 ************************************ 00:07:44.765 START TEST filesystem_ext4 00:07:44.765 ************************************ 00:07:44.765 22:27:45 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:44.765 22:27:45 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:44.765 22:27:45 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.765 22:27:45 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:44.765 22:27:45 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:44.765 22:27:45 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:44.765 22:27:45 -- common/autotest_common.sh@914 -- # local i=0 00:07:44.765 22:27:45 -- common/autotest_common.sh@915 -- # local force 00:07:44.765 22:27:45 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:44.765 22:27:45 -- common/autotest_common.sh@918 -- # force=-F 00:07:44.765 22:27:45 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:44.765 mke2fs 1.47.0 (5-Feb-2023) 00:07:44.765 Discarding device blocks: 0/522240 done 00:07:44.765 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:44.765 Filesystem UUID: 3d177923-8f91-4dca-ab5a-974b5deb217e 00:07:44.765 Superblock backups stored on blocks: 00:07:44.765 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:44.765 00:07:44.765 Allocating group tables: 0/64 done 00:07:44.765 Writing inode tables: 0/64 done 00:07:45.023 Creating journal (8192 blocks): done 00:07:45.023 Writing superblocks and filesystem accounting information: 0/64 done 00:07:45.023 00:07:45.023 22:27:45 -- common/autotest_common.sh@931 -- # return 0 00:07:45.023 22:27:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.293 22:27:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.293 22:27:51 -- target/filesystem.sh@25 -- # sync 00:07:50.552 22:27:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.553 22:27:51 -- target/filesystem.sh@27 -- # sync 00:07:50.553 22:27:51 -- target/filesystem.sh@29 -- # i=0 00:07:50.553 22:27:51 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.553 22:27:51 -- target/filesystem.sh@37 -- # kill -0 72269 00:07:50.553 22:27:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.553 22:27:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.553 22:27:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.553 22:27:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.553 00:07:50.553 real 0m5.769s 00:07:50.553 user 0m0.022s 00:07:50.553 sys 0m0.064s 00:07:50.553 
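On the initiator side the flow traced above reduces to: connect to the subsystem over TCP, wait for the namespace to show up as a block device, partition it, then run the same create/mount/IO/unmount cycle for each filesystem under test. A condensed sketch, assuming the controller enumerates as /dev/nvme0n1 as it does in this run (the real waitforserial helper polls lsblk a bounded number of times rather than forever):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 \
        --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1
    mkfs.ext4 -F /dev/nvme0n1p1       # the btrfs and xfs passes use mkfs.btrfs -f / mkfs.xfs -f
    mkdir -p /mnt/device
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync     # prove writes make it through the NVMe/TCP transport
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                # the target process must still be alive after the I/O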
22:27:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.553 22:27:51 -- common/autotest_common.sh@10 -- # set +x 00:07:50.553 ************************************ 00:07:50.553 END TEST filesystem_ext4 00:07:50.553 ************************************ 00:07:50.553 22:27:51 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:50.553 22:27:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:50.553 22:27:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.553 22:27:51 -- common/autotest_common.sh@10 -- # set +x 00:07:50.553 ************************************ 00:07:50.553 START TEST filesystem_btrfs 00:07:50.553 ************************************ 00:07:50.553 22:27:51 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:50.553 22:27:51 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:50.553 22:27:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.553 22:27:51 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:50.553 22:27:51 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:50.553 22:27:51 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:50.553 22:27:51 -- common/autotest_common.sh@914 -- # local i=0 00:07:50.553 22:27:51 -- common/autotest_common.sh@915 -- # local force 00:07:50.553 22:27:51 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:50.553 22:27:51 -- common/autotest_common.sh@920 -- # force=-f 00:07:50.553 22:27:51 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:50.812 btrfs-progs v6.8.1 00:07:50.812 See https://btrfs.readthedocs.io for more information. 00:07:50.812 00:07:50.812 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:50.812 NOTE: several default settings have changed in version 5.15, please make sure 00:07:50.812 this does not affect your deployments: 00:07:50.812 - DUP for metadata (-m dup) 00:07:50.812 - enabled no-holes (-O no-holes) 00:07:50.812 - enabled free-space-tree (-R free-space-tree) 00:07:50.812 00:07:50.812 Label: (null) 00:07:50.812 UUID: e78ffdf5-155a-4c34-b6fe-d36cae9300a4 00:07:50.812 Node size: 16384 00:07:50.812 Sector size: 4096 (CPU page size: 4096) 00:07:50.812 Filesystem size: 510.00MiB 00:07:50.812 Block group profiles: 00:07:50.812 Data: single 8.00MiB 00:07:50.812 Metadata: DUP 32.00MiB 00:07:50.812 System: DUP 8.00MiB 00:07:50.812 SSD detected: yes 00:07:50.812 Zoned device: no 00:07:50.812 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:50.812 Checksum: crc32c 00:07:50.812 Number of devices: 1 00:07:50.812 Devices: 00:07:50.812 ID SIZE PATH 00:07:50.812 1 510.00MiB /dev/nvme0n1p1 00:07:50.812 00:07:50.812 22:27:51 -- common/autotest_common.sh@931 -- # return 0 00:07:50.812 22:27:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.812 22:27:51 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.812 22:27:51 -- target/filesystem.sh@25 -- # sync 00:07:50.812 22:27:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.812 22:27:51 -- target/filesystem.sh@27 -- # sync 00:07:50.812 22:27:51 -- target/filesystem.sh@29 -- # i=0 00:07:50.812 22:27:51 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.812 22:27:51 -- target/filesystem.sh@37 -- # kill -0 72269 00:07:50.812 22:27:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.812 22:27:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.812 22:27:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.812 22:27:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.812 ************************************ 00:07:50.812 END TEST filesystem_btrfs 00:07:50.812 ************************************ 00:07:50.812 00:07:50.812 real 0m0.229s 00:07:50.812 user 0m0.023s 00:07:50.812 sys 0m0.067s 00:07:50.812 22:27:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.812 22:27:51 -- common/autotest_common.sh@10 -- # set +x 00:07:50.812 22:27:51 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:50.812 22:27:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:50.812 22:27:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.812 22:27:51 -- common/autotest_common.sh@10 -- # set +x 00:07:50.812 ************************************ 00:07:50.812 START TEST filesystem_xfs 00:07:50.812 ************************************ 00:07:50.812 22:27:51 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:50.812 22:27:51 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:50.812 22:27:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.812 22:27:51 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:50.812 22:27:51 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:50.812 22:27:51 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:50.812 22:27:51 -- common/autotest_common.sh@914 -- # local i=0 00:07:50.812 22:27:51 -- common/autotest_common.sh@915 -- # local force 00:07:50.812 22:27:51 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:50.812 22:27:51 -- common/autotest_common.sh@920 -- # force=-f 00:07:50.812 22:27:51 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:51.071 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:51.071 = sectsz=512 attr=2, projid32bit=1 00:07:51.071 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:51.071 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:51.071 data = bsize=4096 blocks=130560, imaxpct=25 00:07:51.071 = sunit=0 swidth=0 blks 00:07:51.071 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:51.071 log =internal log bsize=4096 blocks=16384, version=2 00:07:51.071 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:51.071 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:51.645 Discarding blocks...Done. 00:07:51.645 22:27:52 -- common/autotest_common.sh@931 -- # return 0 00:07:51.645 22:27:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.253 22:27:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.253 22:27:54 -- target/filesystem.sh@25 -- # sync 00:07:54.253 22:27:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.253 22:27:54 -- target/filesystem.sh@27 -- # sync 00:07:54.253 22:27:54 -- target/filesystem.sh@29 -- # i=0 00:07:54.253 22:27:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.253 22:27:54 -- target/filesystem.sh@37 -- # kill -0 72269 00:07:54.253 22:27:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.253 22:27:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.253 22:27:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.253 22:27:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.253 ************************************ 00:07:54.253 END TEST filesystem_xfs 00:07:54.253 ************************************ 00:07:54.253 00:07:54.253 real 0m3.226s 00:07:54.253 user 0m0.017s 00:07:54.253 sys 0m0.062s 00:07:54.253 22:27:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.253 22:27:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.253 22:27:54 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:54.253 22:27:54 -- target/filesystem.sh@93 -- # sync 00:07:54.253 22:27:54 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.253 22:27:54 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.253 22:27:54 -- common/autotest_common.sh@1208 -- # local i=0 00:07:54.253 22:27:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:54.253 22:27:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.253 22:27:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:54.253 22:27:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.253 22:27:54 -- common/autotest_common.sh@1220 -- # return 0 00:07:54.253 22:27:54 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.253 22:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.253 22:27:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.253 22:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.253 22:27:54 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:54.253 22:27:54 -- target/filesystem.sh@101 -- # killprocess 72269 00:07:54.253 22:27:54 -- common/autotest_common.sh@936 -- # '[' -z 72269 ']' 00:07:54.253 22:27:54 -- common/autotest_common.sh@940 -- # kill -0 72269 00:07:54.253 22:27:54 -- common/autotest_common.sh@941 -- # uname 00:07:54.253 22:27:54 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:54.253 22:27:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72269 00:07:54.253 killing process with pid 72269 00:07:54.253 22:27:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:54.253 22:27:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:54.253 22:27:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72269' 00:07:54.253 22:27:54 -- common/autotest_common.sh@955 -- # kill 72269 00:07:54.253 22:27:54 -- common/autotest_common.sh@960 -- # wait 72269 00:07:54.821 22:27:55 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:54.821 00:07:54.821 real 0m14.868s 00:07:54.821 user 0m57.314s 00:07:54.821 sys 0m1.696s 00:07:54.821 ************************************ 00:07:54.821 END TEST nvmf_filesystem_no_in_capsule 00:07:54.821 ************************************ 00:07:54.821 22:27:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.821 22:27:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.821 22:27:55 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:54.821 22:27:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:54.821 22:27:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.821 22:27:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.821 ************************************ 00:07:54.821 START TEST nvmf_filesystem_in_capsule 00:07:54.821 ************************************ 00:07:54.821 22:27:55 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:54.821 22:27:55 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:54.821 22:27:55 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:54.821 22:27:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:54.821 22:27:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.821 22:27:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.821 22:27:55 -- nvmf/common.sh@469 -- # nvmfpid=72646 00:07:54.821 22:27:55 -- nvmf/common.sh@470 -- # waitforlisten 72646 00:07:54.821 22:27:55 -- common/autotest_common.sh@829 -- # '[' -z 72646 ']' 00:07:54.821 22:27:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.821 22:27:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.821 22:27:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.821 22:27:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.821 22:27:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.821 22:27:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.821 [2024-11-20 22:27:55.508080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
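The second half of the suite (nvmf_filesystem_in_capsule, starting here with a fresh nvmf_tgt instance) repeats exactly the same filesystem cycle; the only functional difference is the transport configuration, which now permits write data to travel inside the NVMe/TCP command capsule instead of being transferred in a separate data phase. In sketch form, the two passes differ only in this one rpc_cmd parameter:

    # Pass 1 (nvmf_filesystem_no_in_capsule): no in-capsule data
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # Pass 2 (nvmf_filesystem_in_capsule): writes of up to 4096 bytes ride in the command capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096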
00:07:54.821 [2024-11-20 22:27:55.508196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.080 [2024-11-20 22:27:55.647501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.080 [2024-11-20 22:27:55.710145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.080 [2024-11-20 22:27:55.710580] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.080 [2024-11-20 22:27:55.710679] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.080 [2024-11-20 22:27:55.710758] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.080 [2024-11-20 22:27:55.710965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.080 [2024-11-20 22:27:55.711340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.080 [2024-11-20 22:27:55.711422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.080 [2024-11-20 22:27:55.711431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.017 22:27:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.017 22:27:56 -- common/autotest_common.sh@862 -- # return 0 00:07:56.017 22:27:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:56.017 22:27:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.017 22:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.017 22:27:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.017 22:27:56 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:56.017 22:27:56 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:56.017 22:27:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.017 22:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.017 [2024-11-20 22:27:56.488036] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.017 22:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.017 22:27:56 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:56.017 22:27:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.017 22:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.017 Malloc1 00:07:56.017 22:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.017 22:27:56 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:56.017 22:27:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.017 22:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.017 22:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.017 22:27:56 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:56.017 22:27:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.017 22:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.017 22:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.017 22:27:56 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.017 22:27:56 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.017 22:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.017 [2024-11-20 22:27:56.712880] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.017 22:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.017 22:27:56 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:56.017 22:27:56 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:56.017 22:27:56 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:56.017 22:27:56 -- common/autotest_common.sh@1369 -- # local bs 00:07:56.017 22:27:56 -- common/autotest_common.sh@1370 -- # local nb 00:07:56.017 22:27:56 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:56.017 22:27:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.017 22:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.017 22:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.017 22:27:56 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:56.017 { 00:07:56.017 "aliases": [ 00:07:56.017 "bbff693b-5587-4e56-b890-484bcaf9094c" 00:07:56.017 ], 00:07:56.017 "assigned_rate_limits": { 00:07:56.017 "r_mbytes_per_sec": 0, 00:07:56.017 "rw_ios_per_sec": 0, 00:07:56.017 "rw_mbytes_per_sec": 0, 00:07:56.017 "w_mbytes_per_sec": 0 00:07:56.017 }, 00:07:56.017 "block_size": 512, 00:07:56.017 "claim_type": "exclusive_write", 00:07:56.017 "claimed": true, 00:07:56.017 "driver_specific": {}, 00:07:56.017 "memory_domains": [ 00:07:56.017 { 00:07:56.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.017 "dma_device_type": 2 00:07:56.017 } 00:07:56.017 ], 00:07:56.017 "name": "Malloc1", 00:07:56.017 "num_blocks": 1048576, 00:07:56.017 "product_name": "Malloc disk", 00:07:56.017 "supported_io_types": { 00:07:56.017 "abort": true, 00:07:56.017 "compare": false, 00:07:56.017 "compare_and_write": false, 00:07:56.017 "flush": true, 00:07:56.017 "nvme_admin": false, 00:07:56.017 "nvme_io": false, 00:07:56.017 "read": true, 00:07:56.017 "reset": true, 00:07:56.017 "unmap": true, 00:07:56.017 "write": true, 00:07:56.017 "write_zeroes": true 00:07:56.017 }, 00:07:56.017 "uuid": "bbff693b-5587-4e56-b890-484bcaf9094c", 00:07:56.017 "zoned": false 00:07:56.017 } 00:07:56.017 ]' 00:07:56.017 22:27:56 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:56.276 22:27:56 -- common/autotest_common.sh@1372 -- # bs=512 00:07:56.276 22:27:56 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:56.276 22:27:56 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:56.276 22:27:56 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:56.276 22:27:56 -- common/autotest_common.sh@1377 -- # echo 512 00:07:56.276 22:27:56 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:56.276 22:27:56 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.535 22:27:57 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.535 22:27:57 -- common/autotest_common.sh@1187 -- # local i=0 00:07:56.535 22:27:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.535 22:27:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:56.535 22:27:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:58.438 22:27:59 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:58.438 22:27:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:58.438 22:27:59 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:58.438 22:27:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:58.438 22:27:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:58.438 22:27:59 -- common/autotest_common.sh@1197 -- # return 0 00:07:58.438 22:27:59 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:58.438 22:27:59 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:58.438 22:27:59 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:58.438 22:27:59 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:58.438 22:27:59 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:58.438 22:27:59 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:58.438 22:27:59 -- setup/common.sh@80 -- # echo 536870912 00:07:58.438 22:27:59 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:58.438 22:27:59 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:58.438 22:27:59 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:58.438 22:27:59 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:58.438 22:27:59 -- target/filesystem.sh@69 -- # partprobe 00:07:58.695 22:27:59 -- target/filesystem.sh@70 -- # sleep 1 00:07:59.631 22:28:00 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:59.631 22:28:00 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:59.631 22:28:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:59.631 22:28:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.631 22:28:00 -- common/autotest_common.sh@10 -- # set +x 00:07:59.631 ************************************ 00:07:59.631 START TEST filesystem_in_capsule_ext4 00:07:59.631 ************************************ 00:07:59.631 22:28:00 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:59.631 22:28:00 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:59.631 22:28:00 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.631 22:28:00 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:59.631 22:28:00 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:59.631 22:28:00 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:59.631 22:28:00 -- common/autotest_common.sh@914 -- # local i=0 00:07:59.631 22:28:00 -- common/autotest_common.sh@915 -- # local force 00:07:59.631 22:28:00 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:59.631 22:28:00 -- common/autotest_common.sh@918 -- # force=-F 00:07:59.631 22:28:00 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:59.631 mke2fs 1.47.0 (5-Feb-2023) 00:07:59.631 Discarding device blocks: 0/522240 done 00:07:59.631 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:59.631 Filesystem UUID: 39f10d58-0532-4e92-a6b4-e907dcf208bc 00:07:59.631 Superblock backups stored on blocks: 00:07:59.631 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:59.631 00:07:59.631 Allocating group tables: 0/64 done 00:07:59.631 Writing inode tables: 0/64 done 00:07:59.631 Creating journal (8192 blocks): done 00:07:59.631 Writing superblocks and filesystem accounting information: 0/64 done 00:07:59.631 00:07:59.631 22:28:00 
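The make_filesystem helper that precedes every mkfs call in this trace mainly normalizes the "force" flag across filesystems; a reduced sketch of the behavior visible here (the local i=0 in the trace hints that the real helper also keeps a retry counter, which is omitted below):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F                  # mkfs.ext4 spells "force" as -F
        else
            force=-f                  # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }
    make_filesystem ext4 /dev/nvme0n1p1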
-- common/autotest_common.sh@931 -- # return 0 00:07:59.631 22:28:00 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.195 22:28:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.195 22:28:05 -- target/filesystem.sh@25 -- # sync 00:08:06.195 22:28:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.195 22:28:05 -- target/filesystem.sh@27 -- # sync 00:08:06.195 22:28:05 -- target/filesystem.sh@29 -- # i=0 00:08:06.195 22:28:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.195 22:28:05 -- target/filesystem.sh@37 -- # kill -0 72646 00:08:06.195 22:28:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.195 22:28:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.195 22:28:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.195 22:28:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.195 ************************************ 00:08:06.195 END TEST filesystem_in_capsule_ext4 00:08:06.195 ************************************ 00:08:06.195 00:08:06.195 real 0m5.592s 00:08:06.195 user 0m0.032s 00:08:06.195 sys 0m0.057s 00:08:06.195 22:28:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.195 22:28:05 -- common/autotest_common.sh@10 -- # set +x 00:08:06.195 22:28:05 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:06.195 22:28:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:06.195 22:28:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.195 22:28:05 -- common/autotest_common.sh@10 -- # set +x 00:08:06.195 ************************************ 00:08:06.195 START TEST filesystem_in_capsule_btrfs 00:08:06.195 ************************************ 00:08:06.195 22:28:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:06.195 22:28:05 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:06.195 22:28:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.195 22:28:05 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:06.195 22:28:05 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:06.195 22:28:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:06.195 22:28:05 -- common/autotest_common.sh@914 -- # local i=0 00:08:06.195 22:28:05 -- common/autotest_common.sh@915 -- # local force 00:08:06.195 22:28:05 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:06.195 22:28:05 -- common/autotest_common.sh@920 -- # force=-f 00:08:06.195 22:28:05 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:06.195 btrfs-progs v6.8.1 00:08:06.195 See https://btrfs.readthedocs.io for more information. 00:08:06.195 00:08:06.195 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:06.195 NOTE: several default settings have changed in version 5.15, please make sure 00:08:06.195 this does not affect your deployments: 00:08:06.195 - DUP for metadata (-m dup) 00:08:06.195 - enabled no-holes (-O no-holes) 00:08:06.195 - enabled free-space-tree (-R free-space-tree) 00:08:06.195 00:08:06.195 Label: (null) 00:08:06.195 UUID: 9ff5ae78-ba47-4921-a676-f320e5b19937 00:08:06.195 Node size: 16384 00:08:06.195 Sector size: 4096 (CPU page size: 4096) 00:08:06.195 Filesystem size: 510.00MiB 00:08:06.195 Block group profiles: 00:08:06.195 Data: single 8.00MiB 00:08:06.195 Metadata: DUP 32.00MiB 00:08:06.195 System: DUP 8.00MiB 00:08:06.195 SSD detected: yes 00:08:06.195 Zoned device: no 00:08:06.195 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:06.195 Checksum: crc32c 00:08:06.195 Number of devices: 1 00:08:06.195 Devices: 00:08:06.195 ID SIZE PATH 00:08:06.195 1 510.00MiB /dev/nvme0n1p1 00:08:06.195 00:08:06.195 22:28:06 -- common/autotest_common.sh@931 -- # return 0 00:08:06.195 22:28:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.195 22:28:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.195 22:28:06 -- target/filesystem.sh@25 -- # sync 00:08:06.195 22:28:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.195 22:28:06 -- target/filesystem.sh@27 -- # sync 00:08:06.195 22:28:06 -- target/filesystem.sh@29 -- # i=0 00:08:06.195 22:28:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.195 22:28:06 -- target/filesystem.sh@37 -- # kill -0 72646 00:08:06.195 22:28:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.195 22:28:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.195 22:28:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.195 22:28:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.195 ************************************ 00:08:06.195 END TEST filesystem_in_capsule_btrfs 00:08:06.195 ************************************ 00:08:06.195 00:08:06.195 real 0m0.269s 00:08:06.195 user 0m0.023s 00:08:06.195 sys 0m0.062s 00:08:06.195 22:28:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.195 22:28:06 -- common/autotest_common.sh@10 -- # set +x 00:08:06.195 22:28:06 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:06.195 22:28:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:06.195 22:28:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.195 22:28:06 -- common/autotest_common.sh@10 -- # set +x 00:08:06.195 ************************************ 00:08:06.195 START TEST filesystem_in_capsule_xfs 00:08:06.195 ************************************ 00:08:06.195 22:28:06 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:06.195 22:28:06 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:06.195 22:28:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.195 22:28:06 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:06.195 22:28:06 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:06.195 22:28:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:06.195 22:28:06 -- common/autotest_common.sh@914 -- # local i=0 00:08:06.195 22:28:06 -- common/autotest_common.sh@915 -- # local force 00:08:06.195 22:28:06 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:06.195 22:28:06 -- common/autotest_common.sh@920 -- # force=-f 00:08:06.195 22:28:06 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:06.195 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:06.195 = sectsz=512 attr=2, projid32bit=1 00:08:06.195 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:06.195 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:06.195 data = bsize=4096 blocks=130560, imaxpct=25 00:08:06.195 = sunit=0 swidth=0 blks 00:08:06.195 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:06.195 log =internal log bsize=4096 blocks=16384, version=2 00:08:06.195 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:06.195 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:06.454 Discarding blocks...Done. 00:08:06.454 22:28:06 -- common/autotest_common.sh@931 -- # return 0 00:08:06.454 22:28:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.359 22:28:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.359 22:28:08 -- target/filesystem.sh@25 -- # sync 00:08:08.359 22:28:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.359 22:28:08 -- target/filesystem.sh@27 -- # sync 00:08:08.359 22:28:08 -- target/filesystem.sh@29 -- # i=0 00:08:08.359 22:28:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.359 22:28:08 -- target/filesystem.sh@37 -- # kill -0 72646 00:08:08.359 22:28:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.359 22:28:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.359 22:28:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.359 22:28:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.359 ************************************ 00:08:08.359 END TEST filesystem_in_capsule_xfs 00:08:08.359 ************************************ 00:08:08.359 00:08:08.359 real 0m2.645s 00:08:08.359 user 0m0.015s 00:08:08.359 sys 0m0.064s 00:08:08.359 22:28:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.359 22:28:08 -- common/autotest_common.sh@10 -- # set +x 00:08:08.359 22:28:08 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:08.359 22:28:08 -- target/filesystem.sh@93 -- # sync 00:08:08.359 22:28:08 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.359 22:28:09 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.359 22:28:09 -- common/autotest_common.sh@1208 -- # local i=0 00:08:08.359 22:28:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:08.359 22:28:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.359 22:28:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.359 22:28:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:08.359 22:28:09 -- common/autotest_common.sh@1220 -- # return 0 00:08:08.359 22:28:09 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.359 22:28:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.359 22:28:09 -- common/autotest_common.sh@10 -- # set +x 00:08:08.359 22:28:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.359 22:28:09 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:08.359 22:28:09 -- target/filesystem.sh@101 -- # killprocess 72646 00:08:08.359 22:28:09 -- common/autotest_common.sh@936 -- # '[' -z 72646 ']' 00:08:08.359 22:28:09 -- common/autotest_common.sh@940 -- # kill -0 72646 00:08:08.359 22:28:09 -- 
common/autotest_common.sh@941 -- # uname 00:08:08.359 22:28:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:08.359 22:28:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72646 00:08:08.618 killing process with pid 72646 00:08:08.618 22:28:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:08.618 22:28:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:08.618 22:28:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72646' 00:08:08.618 22:28:09 -- common/autotest_common.sh@955 -- # kill 72646 00:08:08.618 22:28:09 -- common/autotest_common.sh@960 -- # wait 72646 00:08:09.185 22:28:09 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:09.185 00:08:09.185 real 0m14.194s 00:08:09.185 user 0m54.747s 00:08:09.185 sys 0m1.632s 00:08:09.185 22:28:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.185 ************************************ 00:08:09.185 END TEST nvmf_filesystem_in_capsule 00:08:09.185 ************************************ 00:08:09.185 22:28:09 -- common/autotest_common.sh@10 -- # set +x 00:08:09.185 22:28:09 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:09.185 22:28:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:09.185 22:28:09 -- nvmf/common.sh@116 -- # sync 00:08:09.185 22:28:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:09.185 22:28:09 -- nvmf/common.sh@119 -- # set +e 00:08:09.185 22:28:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:09.185 22:28:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:09.185 rmmod nvme_tcp 00:08:09.185 rmmod nvme_fabrics 00:08:09.185 rmmod nvme_keyring 00:08:09.185 22:28:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:09.185 22:28:09 -- nvmf/common.sh@123 -- # set -e 00:08:09.185 22:28:09 -- nvmf/common.sh@124 -- # return 0 00:08:09.185 22:28:09 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:09.185 22:28:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:09.185 22:28:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:09.185 22:28:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:09.185 22:28:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.185 22:28:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:09.185 22:28:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.185 22:28:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.185 22:28:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.185 22:28:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:09.185 00:08:09.185 real 0m30.044s 00:08:09.185 user 1m52.443s 00:08:09.185 sys 0m3.768s 00:08:09.185 22:28:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.185 22:28:09 -- common/autotest_common.sh@10 -- # set +x 00:08:09.185 ************************************ 00:08:09.185 END TEST nvmf_filesystem 00:08:09.185 ************************************ 00:08:09.185 22:28:09 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:09.185 22:28:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:09.185 22:28:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.185 22:28:09 -- common/autotest_common.sh@10 -- # set +x 00:08:09.185 ************************************ 00:08:09.185 START TEST nvmf_discovery 00:08:09.185 ************************************ 00:08:09.185 22:28:09 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:09.445 * Looking for test storage... 00:08:09.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.445 22:28:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:09.445 22:28:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:09.445 22:28:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:09.445 22:28:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:09.445 22:28:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:09.445 22:28:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:09.445 22:28:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:09.445 22:28:10 -- scripts/common.sh@335 -- # IFS=.-: 00:08:09.445 22:28:10 -- scripts/common.sh@335 -- # read -ra ver1 00:08:09.445 22:28:10 -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.445 22:28:10 -- scripts/common.sh@336 -- # read -ra ver2 00:08:09.445 22:28:10 -- scripts/common.sh@337 -- # local 'op=<' 00:08:09.445 22:28:10 -- scripts/common.sh@339 -- # ver1_l=2 00:08:09.445 22:28:10 -- scripts/common.sh@340 -- # ver2_l=1 00:08:09.445 22:28:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:09.445 22:28:10 -- scripts/common.sh@343 -- # case "$op" in 00:08:09.445 22:28:10 -- scripts/common.sh@344 -- # : 1 00:08:09.445 22:28:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:09.445 22:28:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.445 22:28:10 -- scripts/common.sh@364 -- # decimal 1 00:08:09.445 22:28:10 -- scripts/common.sh@352 -- # local d=1 00:08:09.445 22:28:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.445 22:28:10 -- scripts/common.sh@354 -- # echo 1 00:08:09.445 22:28:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:09.445 22:28:10 -- scripts/common.sh@365 -- # decimal 2 00:08:09.445 22:28:10 -- scripts/common.sh@352 -- # local d=2 00:08:09.445 22:28:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.445 22:28:10 -- scripts/common.sh@354 -- # echo 2 00:08:09.445 22:28:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:09.445 22:28:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:09.445 22:28:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:09.445 22:28:10 -- scripts/common.sh@367 -- # return 0 00:08:09.445 22:28:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.445 22:28:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:09.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.446 --rc genhtml_branch_coverage=1 00:08:09.446 --rc genhtml_function_coverage=1 00:08:09.446 --rc genhtml_legend=1 00:08:09.446 --rc geninfo_all_blocks=1 00:08:09.446 --rc geninfo_unexecuted_blocks=1 00:08:09.446 00:08:09.446 ' 00:08:09.446 22:28:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:09.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.446 --rc genhtml_branch_coverage=1 00:08:09.446 --rc genhtml_function_coverage=1 00:08:09.446 --rc genhtml_legend=1 00:08:09.446 --rc geninfo_all_blocks=1 00:08:09.446 --rc geninfo_unexecuted_blocks=1 00:08:09.446 00:08:09.446 ' 00:08:09.446 22:28:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:09.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.446 --rc genhtml_branch_coverage=1 00:08:09.446 --rc genhtml_function_coverage=1 00:08:09.446 --rc genhtml_legend=1 00:08:09.446 
--rc geninfo_all_blocks=1 00:08:09.446 --rc geninfo_unexecuted_blocks=1 00:08:09.446 00:08:09.446 ' 00:08:09.446 22:28:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:09.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.446 --rc genhtml_branch_coverage=1 00:08:09.446 --rc genhtml_function_coverage=1 00:08:09.446 --rc genhtml_legend=1 00:08:09.446 --rc geninfo_all_blocks=1 00:08:09.446 --rc geninfo_unexecuted_blocks=1 00:08:09.446 00:08:09.446 ' 00:08:09.446 22:28:10 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.446 22:28:10 -- nvmf/common.sh@7 -- # uname -s 00:08:09.446 22:28:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.446 22:28:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.446 22:28:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.446 22:28:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.446 22:28:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.446 22:28:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.446 22:28:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.446 22:28:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.446 22:28:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.446 22:28:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.446 22:28:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:08:09.446 22:28:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:08:09.446 22:28:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.446 22:28:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.446 22:28:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.446 22:28:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.446 22:28:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.446 22:28:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.446 22:28:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.446 22:28:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.446 22:28:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.446 22:28:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.446 22:28:10 -- paths/export.sh@5 -- # export PATH 00:08:09.446 22:28:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.446 22:28:10 -- nvmf/common.sh@46 -- # : 0 00:08:09.446 22:28:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:09.446 22:28:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:09.446 22:28:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:09.446 22:28:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.446 22:28:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.446 22:28:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:09.446 22:28:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:09.446 22:28:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:09.446 22:28:10 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:09.446 22:28:10 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:09.446 22:28:10 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:09.446 22:28:10 -- target/discovery.sh@15 -- # hash nvme 00:08:09.446 22:28:10 -- target/discovery.sh@20 -- # nvmftestinit 00:08:09.446 22:28:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:09.446 22:28:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.446 22:28:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:09.446 22:28:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:09.446 22:28:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:09.446 22:28:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.446 22:28:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.446 22:28:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.446 22:28:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:09.446 22:28:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:09.446 22:28:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:09.446 22:28:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:09.446 22:28:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:09.446 22:28:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:09.446 22:28:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.446 22:28:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.446 22:28:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.446 22:28:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:09.446 22:28:10 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.446 22:28:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.446 22:28:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.446 22:28:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.446 22:28:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.446 22:28:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.446 22:28:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.446 22:28:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.446 22:28:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:09.446 22:28:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:09.446 Cannot find device "nvmf_tgt_br" 00:08:09.446 22:28:10 -- nvmf/common.sh@154 -- # true 00:08:09.446 22:28:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.446 Cannot find device "nvmf_tgt_br2" 00:08:09.446 22:28:10 -- nvmf/common.sh@155 -- # true 00:08:09.446 22:28:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:09.446 22:28:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:09.446 Cannot find device "nvmf_tgt_br" 00:08:09.446 22:28:10 -- nvmf/common.sh@157 -- # true 00:08:09.446 22:28:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:09.446 Cannot find device "nvmf_tgt_br2" 00:08:09.446 22:28:10 -- nvmf/common.sh@158 -- # true 00:08:09.446 22:28:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:09.706 22:28:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:09.706 22:28:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.706 22:28:10 -- nvmf/common.sh@161 -- # true 00:08:09.706 22:28:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.706 22:28:10 -- nvmf/common.sh@162 -- # true 00:08:09.706 22:28:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.706 22:28:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.706 22:28:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.706 22:28:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.706 22:28:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.706 22:28:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.706 22:28:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.706 22:28:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.706 22:28:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.706 22:28:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:09.706 22:28:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:09.706 22:28:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:09.706 22:28:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:09.706 22:28:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.706 22:28:10 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.706 22:28:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.706 22:28:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:09.706 22:28:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:09.706 22:28:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.706 22:28:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.706 22:28:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.706 22:28:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.706 22:28:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.706 22:28:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:09.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:09.706 00:08:09.706 --- 10.0.0.2 ping statistics --- 00:08:09.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.706 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:09.706 22:28:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:09.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:09.706 00:08:09.706 --- 10.0.0.3 ping statistics --- 00:08:09.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.706 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:09.706 22:28:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:09.706 00:08:09.706 --- 10.0.0.1 ping statistics --- 00:08:09.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.706 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:09.706 22:28:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.706 22:28:10 -- nvmf/common.sh@421 -- # return 0 00:08:09.706 22:28:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:09.706 22:28:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.706 22:28:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:09.706 22:28:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:09.706 22:28:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.706 22:28:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:09.706 22:28:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:09.965 22:28:10 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:09.965 22:28:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:09.965 22:28:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.965 22:28:10 -- common/autotest_common.sh@10 -- # set +x 00:08:09.965 22:28:10 -- nvmf/common.sh@469 -- # nvmfpid=73193 00:08:09.965 22:28:10 -- nvmf/common.sh@470 -- # waitforlisten 73193 00:08:09.965 22:28:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.965 22:28:10 -- common/autotest_common.sh@829 -- # '[' -z 73193 ']' 00:08:09.965 22:28:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.965 22:28:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.965 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.965 22:28:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.965 22:28:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.965 22:28:10 -- common/autotest_common.sh@10 -- # set +x 00:08:09.965 [2024-11-20 22:28:10.502486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:09.965 [2024-11-20 22:28:10.502574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.965 [2024-11-20 22:28:10.642661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.223 [2024-11-20 22:28:10.714082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:10.223 [2024-11-20 22:28:10.714259] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.223 [2024-11-20 22:28:10.714290] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.223 [2024-11-20 22:28:10.714304] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.223 [2024-11-20 22:28:10.714480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.223 [2024-11-20 22:28:10.715251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.223 [2024-11-20 22:28:10.715397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.223 [2024-11-20 22:28:10.715416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.789 22:28:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.789 22:28:11 -- common/autotest_common.sh@862 -- # return 0 00:08:10.789 22:28:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:10.789 22:28:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.789 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:10.789 22:28:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.789 22:28:11 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.789 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.789 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:10.789 [2024-11-20 22:28:11.518187] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.048 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.048 22:28:11 -- target/discovery.sh@26 -- # seq 1 4 00:08:11.048 22:28:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:11.048 22:28:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:11.048 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.048 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 Null1 00:08:11.048 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.048 22:28:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:11.048 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.048 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
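The rpc_cmd calls traced above and below provision four identical null-backed subsystems before discovery is exercised. A condensed sketch of that flow, assuming rpc_cmd resolves to the repo's scripts/rpc.py talking to the default /var/tmp/spdk.sock socket (shown here for Null1/cnode1 only; the same steps repeat for Null2-Null4 and cnode2-cnode4):

    # sizes come from NULL_BDEV_SIZE=102400 and NULL_BLOCK_SIZE=512 in discovery.sh
    scripts/rpc.py bdev_null_create Null1 102400 512
    # -a allows any host to connect, -s sets the serial number the controller reports
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # finally expose the discovery service itself and a referral on the referral port 4430
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

Each RPC subcommand and its arguments appear verbatim in the trace; only the scripts/rpc.py wrapper is an assumption here.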
00:08:11.048 22:28:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:11.048 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.048 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.048 22:28:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.048 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.048 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 [2024-11-20 22:28:11.578928] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.048 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.048 22:28:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:11.048 22:28:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:11.048 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.048 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 Null2 00:08:11.048 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.048 22:28:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:11.048 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.048 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.048 22:28:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:11.048 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.048 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:11.049 22:28:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 Null3 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:11.049 22:28:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 Null4 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:11.049 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.049 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.049 22:28:11 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 4420 00:08:11.308 00:08:11.308 Discovery Log Number of Records 6, Generation counter 6 00:08:11.308 =====Discovery Log Entry 0====== 00:08:11.308 trtype: tcp 00:08:11.308 adrfam: ipv4 00:08:11.308 subtype: current discovery subsystem 00:08:11.308 treq: not required 00:08:11.308 portid: 0 00:08:11.308 trsvcid: 4420 00:08:11.308 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:11.308 traddr: 10.0.0.2 00:08:11.308 eflags: explicit discovery connections, duplicate discovery information 00:08:11.308 sectype: none 00:08:11.308 =====Discovery Log Entry 1====== 00:08:11.308 trtype: tcp 00:08:11.308 adrfam: ipv4 00:08:11.308 subtype: nvme subsystem 00:08:11.308 treq: not required 00:08:11.308 portid: 0 00:08:11.308 trsvcid: 4420 00:08:11.308 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:11.308 traddr: 10.0.0.2 00:08:11.308 eflags: none 00:08:11.308 sectype: none 00:08:11.308 =====Discovery Log Entry 2====== 00:08:11.308 trtype: tcp 00:08:11.308 adrfam: ipv4 00:08:11.308 subtype: nvme subsystem 00:08:11.308 treq: not required 00:08:11.308 portid: 0 00:08:11.308 trsvcid: 4420 
00:08:11.308 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:11.308 traddr: 10.0.0.2 00:08:11.308 eflags: none 00:08:11.308 sectype: none 00:08:11.308 =====Discovery Log Entry 3====== 00:08:11.308 trtype: tcp 00:08:11.308 adrfam: ipv4 00:08:11.308 subtype: nvme subsystem 00:08:11.308 treq: not required 00:08:11.308 portid: 0 00:08:11.308 trsvcid: 4420 00:08:11.308 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:11.308 traddr: 10.0.0.2 00:08:11.308 eflags: none 00:08:11.308 sectype: none 00:08:11.308 =====Discovery Log Entry 4====== 00:08:11.308 trtype: tcp 00:08:11.308 adrfam: ipv4 00:08:11.308 subtype: nvme subsystem 00:08:11.308 treq: not required 00:08:11.308 portid: 0 00:08:11.308 trsvcid: 4420 00:08:11.308 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:11.308 traddr: 10.0.0.2 00:08:11.308 eflags: none 00:08:11.308 sectype: none 00:08:11.308 =====Discovery Log Entry 5====== 00:08:11.308 trtype: tcp 00:08:11.308 adrfam: ipv4 00:08:11.308 subtype: discovery subsystem referral 00:08:11.308 treq: not required 00:08:11.308 portid: 0 00:08:11.308 trsvcid: 4430 00:08:11.308 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:11.308 traddr: 10.0.0.2 00:08:11.308 eflags: none 00:08:11.308 sectype: none 00:08:11.308 Perform nvmf subsystem discovery via RPC 00:08:11.308 22:28:11 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:11.308 22:28:11 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:11.308 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.308 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.308 [2024-11-20 22:28:11.815110] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:11.308 [ 00:08:11.308 { 00:08:11.308 "allow_any_host": true, 00:08:11.308 "hosts": [], 00:08:11.308 "listen_addresses": [ 00:08:11.308 { 00:08:11.308 "adrfam": "IPv4", 00:08:11.308 "traddr": "10.0.0.2", 00:08:11.308 "transport": "TCP", 00:08:11.308 "trsvcid": "4420", 00:08:11.308 "trtype": "TCP" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:11.308 "subtype": "Discovery" 00:08:11.308 }, 00:08:11.308 { 00:08:11.308 "allow_any_host": true, 00:08:11.308 "hosts": [], 00:08:11.308 "listen_addresses": [ 00:08:11.308 { 00:08:11.308 "adrfam": "IPv4", 00:08:11.308 "traddr": "10.0.0.2", 00:08:11.308 "transport": "TCP", 00:08:11.308 "trsvcid": "4420", 00:08:11.308 "trtype": "TCP" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "max_cntlid": 65519, 00:08:11.308 "max_namespaces": 32, 00:08:11.308 "min_cntlid": 1, 00:08:11.308 "model_number": "SPDK bdev Controller", 00:08:11.308 "namespaces": [ 00:08:11.308 { 00:08:11.308 "bdev_name": "Null1", 00:08:11.308 "name": "Null1", 00:08:11.308 "nguid": "201F29FA221549B38FD159DFB5E40BCD", 00:08:11.308 "nsid": 1, 00:08:11.308 "uuid": "201f29fa-2215-49b3-8fd1-59dfb5e40bcd" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:11.308 "serial_number": "SPDK00000000000001", 00:08:11.308 "subtype": "NVMe" 00:08:11.308 }, 00:08:11.308 { 00:08:11.308 "allow_any_host": true, 00:08:11.308 "hosts": [], 00:08:11.308 "listen_addresses": [ 00:08:11.308 { 00:08:11.308 "adrfam": "IPv4", 00:08:11.308 "traddr": "10.0.0.2", 00:08:11.308 "transport": "TCP", 00:08:11.308 "trsvcid": "4420", 00:08:11.308 "trtype": "TCP" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "max_cntlid": 65519, 00:08:11.308 "max_namespaces": 32, 00:08:11.308 "min_cntlid": 1, 
00:08:11.308 "model_number": "SPDK bdev Controller", 00:08:11.308 "namespaces": [ 00:08:11.308 { 00:08:11.308 "bdev_name": "Null2", 00:08:11.308 "name": "Null2", 00:08:11.308 "nguid": "21C16697798C4E0680FD54BB8CAAE9DD", 00:08:11.308 "nsid": 1, 00:08:11.308 "uuid": "21c16697-798c-4e06-80fd-54bb8caae9dd" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:11.308 "serial_number": "SPDK00000000000002", 00:08:11.308 "subtype": "NVMe" 00:08:11.308 }, 00:08:11.308 { 00:08:11.308 "allow_any_host": true, 00:08:11.308 "hosts": [], 00:08:11.308 "listen_addresses": [ 00:08:11.308 { 00:08:11.308 "adrfam": "IPv4", 00:08:11.308 "traddr": "10.0.0.2", 00:08:11.308 "transport": "TCP", 00:08:11.308 "trsvcid": "4420", 00:08:11.308 "trtype": "TCP" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "max_cntlid": 65519, 00:08:11.308 "max_namespaces": 32, 00:08:11.308 "min_cntlid": 1, 00:08:11.308 "model_number": "SPDK bdev Controller", 00:08:11.308 "namespaces": [ 00:08:11.308 { 00:08:11.308 "bdev_name": "Null3", 00:08:11.308 "name": "Null3", 00:08:11.308 "nguid": "5D7538BEC5674172A6EA5B357CD7ECF2", 00:08:11.308 "nsid": 1, 00:08:11.308 "uuid": "5d7538be-c567-4172-a6ea-5b357cd7ecf2" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:11.308 "serial_number": "SPDK00000000000003", 00:08:11.308 "subtype": "NVMe" 00:08:11.308 }, 00:08:11.308 { 00:08:11.308 "allow_any_host": true, 00:08:11.308 "hosts": [], 00:08:11.308 "listen_addresses": [ 00:08:11.308 { 00:08:11.308 "adrfam": "IPv4", 00:08:11.308 "traddr": "10.0.0.2", 00:08:11.308 "transport": "TCP", 00:08:11.308 "trsvcid": "4420", 00:08:11.308 "trtype": "TCP" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "max_cntlid": 65519, 00:08:11.308 "max_namespaces": 32, 00:08:11.308 "min_cntlid": 1, 00:08:11.308 "model_number": "SPDK bdev Controller", 00:08:11.308 "namespaces": [ 00:08:11.308 { 00:08:11.308 "bdev_name": "Null4", 00:08:11.308 "name": "Null4", 00:08:11.308 "nguid": "4451A5C602EC4A1B812C4347273FDB20", 00:08:11.308 "nsid": 1, 00:08:11.308 "uuid": "4451a5c6-02ec-4a1b-812c-4347273fdb20" 00:08:11.308 } 00:08:11.308 ], 00:08:11.308 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:11.308 "serial_number": "SPDK00000000000004", 00:08:11.308 "subtype": "NVMe" 00:08:11.308 } 00:08:11.308 ] 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@42 -- # seq 1 4 00:08:11.309 22:28:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:11.309 22:28:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:11.309 22:28:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:11.309 22:28:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:11.309 22:28:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:11.309 22:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.309 22:28:11 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:11.309 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 22:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.309 22:28:11 -- target/discovery.sh@49 -- # check_bdevs= 00:08:11.309 22:28:11 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:11.309 22:28:11 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:11.309 22:28:11 -- target/discovery.sh@57 -- # nvmftestfini 00:08:11.309 22:28:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:11.309 22:28:11 -- nvmf/common.sh@116 -- # sync 00:08:11.309 22:28:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:11.309 22:28:11 -- nvmf/common.sh@119 -- # set +e 00:08:11.309 22:28:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:11.309 22:28:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:11.309 rmmod nvme_tcp 00:08:11.309 rmmod nvme_fabrics 00:08:11.309 rmmod nvme_keyring 00:08:11.567 22:28:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:11.567 22:28:12 -- nvmf/common.sh@123 -- # set -e 00:08:11.567 22:28:12 -- nvmf/common.sh@124 -- # return 0 00:08:11.567 22:28:12 -- nvmf/common.sh@477 -- # '[' -n 73193 ']' 00:08:11.567 22:28:12 -- nvmf/common.sh@478 -- # killprocess 73193 00:08:11.567 22:28:12 -- common/autotest_common.sh@936 -- # '[' -z 73193 ']' 00:08:11.567 22:28:12 -- 
common/autotest_common.sh@940 -- # kill -0 73193 00:08:11.567 22:28:12 -- common/autotest_common.sh@941 -- # uname 00:08:11.567 22:28:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.567 22:28:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73193 00:08:11.567 22:28:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:11.567 killing process with pid 73193 00:08:11.567 22:28:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:11.567 22:28:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73193' 00:08:11.567 22:28:12 -- common/autotest_common.sh@955 -- # kill 73193 00:08:11.567 [2024-11-20 22:28:12.086604] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:11.567 22:28:12 -- common/autotest_common.sh@960 -- # wait 73193 00:08:11.826 22:28:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:11.826 22:28:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:11.826 22:28:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:11.826 22:28:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.826 22:28:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:11.826 22:28:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.826 22:28:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.826 22:28:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.826 22:28:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:11.826 00:08:11.826 real 0m2.544s 00:08:11.826 user 0m6.875s 00:08:11.826 sys 0m0.665s 00:08:11.826 22:28:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.826 ************************************ 00:08:11.826 END TEST nvmf_discovery 00:08:11.826 22:28:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.826 ************************************ 00:08:11.826 22:28:12 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:11.826 22:28:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:11.826 22:28:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.826 22:28:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.826 ************************************ 00:08:11.826 START TEST nvmf_referrals 00:08:11.826 ************************************ 00:08:11.826 22:28:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:11.826 * Looking for test storage... 
00:08:11.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.086 22:28:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:12.086 22:28:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:12.086 22:28:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:12.086 22:28:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:12.086 22:28:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:12.086 22:28:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:12.086 22:28:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:12.086 22:28:12 -- scripts/common.sh@335 -- # IFS=.-: 00:08:12.086 22:28:12 -- scripts/common.sh@335 -- # read -ra ver1 00:08:12.086 22:28:12 -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.086 22:28:12 -- scripts/common.sh@336 -- # read -ra ver2 00:08:12.086 22:28:12 -- scripts/common.sh@337 -- # local 'op=<' 00:08:12.086 22:28:12 -- scripts/common.sh@339 -- # ver1_l=2 00:08:12.086 22:28:12 -- scripts/common.sh@340 -- # ver2_l=1 00:08:12.086 22:28:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:12.086 22:28:12 -- scripts/common.sh@343 -- # case "$op" in 00:08:12.086 22:28:12 -- scripts/common.sh@344 -- # : 1 00:08:12.086 22:28:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:12.086 22:28:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.086 22:28:12 -- scripts/common.sh@364 -- # decimal 1 00:08:12.086 22:28:12 -- scripts/common.sh@352 -- # local d=1 00:08:12.086 22:28:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.086 22:28:12 -- scripts/common.sh@354 -- # echo 1 00:08:12.086 22:28:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:12.086 22:28:12 -- scripts/common.sh@365 -- # decimal 2 00:08:12.086 22:28:12 -- scripts/common.sh@352 -- # local d=2 00:08:12.086 22:28:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.086 22:28:12 -- scripts/common.sh@354 -- # echo 2 00:08:12.086 22:28:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:12.086 22:28:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:12.086 22:28:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:12.086 22:28:12 -- scripts/common.sh@367 -- # return 0 00:08:12.086 22:28:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.086 22:28:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:12.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.086 --rc genhtml_branch_coverage=1 00:08:12.086 --rc genhtml_function_coverage=1 00:08:12.086 --rc genhtml_legend=1 00:08:12.086 --rc geninfo_all_blocks=1 00:08:12.086 --rc geninfo_unexecuted_blocks=1 00:08:12.086 00:08:12.086 ' 00:08:12.086 22:28:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:12.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.086 --rc genhtml_branch_coverage=1 00:08:12.086 --rc genhtml_function_coverage=1 00:08:12.086 --rc genhtml_legend=1 00:08:12.086 --rc geninfo_all_blocks=1 00:08:12.086 --rc geninfo_unexecuted_blocks=1 00:08:12.086 00:08:12.086 ' 00:08:12.086 22:28:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:12.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.086 --rc genhtml_branch_coverage=1 00:08:12.086 --rc genhtml_function_coverage=1 00:08:12.086 --rc genhtml_legend=1 00:08:12.086 --rc geninfo_all_blocks=1 00:08:12.086 --rc geninfo_unexecuted_blocks=1 00:08:12.086 00:08:12.086 ' 00:08:12.086 
22:28:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:12.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.086 --rc genhtml_branch_coverage=1 00:08:12.086 --rc genhtml_function_coverage=1 00:08:12.086 --rc genhtml_legend=1 00:08:12.086 --rc geninfo_all_blocks=1 00:08:12.086 --rc geninfo_unexecuted_blocks=1 00:08:12.086 00:08:12.086 ' 00:08:12.086 22:28:12 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.086 22:28:12 -- nvmf/common.sh@7 -- # uname -s 00:08:12.086 22:28:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.086 22:28:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.086 22:28:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.086 22:28:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.086 22:28:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.086 22:28:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.086 22:28:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.086 22:28:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.086 22:28:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.086 22:28:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.086 22:28:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:08:12.086 22:28:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:08:12.086 22:28:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.086 22:28:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.086 22:28:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.086 22:28:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.086 22:28:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.086 22:28:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.086 22:28:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.086 22:28:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.086 22:28:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.086 22:28:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.086 22:28:12 -- paths/export.sh@5 -- # export PATH 00:08:12.086 22:28:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.086 22:28:12 -- nvmf/common.sh@46 -- # : 0 00:08:12.086 22:28:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:12.086 22:28:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:12.086 22:28:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:12.086 22:28:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.086 22:28:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.086 22:28:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:12.086 22:28:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:12.087 22:28:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:12.087 22:28:12 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:12.087 22:28:12 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:12.087 22:28:12 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:12.087 22:28:12 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:12.087 22:28:12 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:12.087 22:28:12 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:12.087 22:28:12 -- target/referrals.sh@37 -- # nvmftestinit 00:08:12.087 22:28:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:12.087 22:28:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.087 22:28:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:12.087 22:28:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:12.087 22:28:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:12.087 22:28:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.087 22:28:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.087 22:28:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.087 22:28:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:12.087 22:28:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:12.087 22:28:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:12.087 22:28:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:12.087 22:28:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:12.087 22:28:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:12.087 22:28:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.087 22:28:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:08:12.087 22:28:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:12.087 22:28:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:12.087 22:28:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.087 22:28:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.087 22:28:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.087 22:28:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.087 22:28:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.087 22:28:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.087 22:28:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.087 22:28:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.087 22:28:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:12.087 22:28:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:12.087 Cannot find device "nvmf_tgt_br" 00:08:12.087 22:28:12 -- nvmf/common.sh@154 -- # true 00:08:12.087 22:28:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.087 Cannot find device "nvmf_tgt_br2" 00:08:12.087 22:28:12 -- nvmf/common.sh@155 -- # true 00:08:12.087 22:28:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:12.087 22:28:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:12.087 Cannot find device "nvmf_tgt_br" 00:08:12.087 22:28:12 -- nvmf/common.sh@157 -- # true 00:08:12.087 22:28:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:12.087 Cannot find device "nvmf_tgt_br2" 00:08:12.087 22:28:12 -- nvmf/common.sh@158 -- # true 00:08:12.087 22:28:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:12.087 22:28:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:12.087 22:28:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.087 22:28:12 -- nvmf/common.sh@161 -- # true 00:08:12.087 22:28:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.087 22:28:12 -- nvmf/common.sh@162 -- # true 00:08:12.087 22:28:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.087 22:28:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.087 22:28:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.345 22:28:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.345 22:28:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.345 22:28:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.345 22:28:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.345 22:28:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:12.345 22:28:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:12.346 22:28:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:12.346 22:28:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:12.346 22:28:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
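The nvmf_veth_init sequence traced above and below rebuilds the same virtual topology the discovery test used: the initiator keeps 10.0.0.1 on nvmf_init_if, the target addresses 10.0.0.2 and 10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are enslaved to the nvmf_br bridge. Condensed into a sketch (the same ip/iptables calls that appear in the trace, minus the error-tolerant teardown at the start):

    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: one for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up, on the host side and inside the namespace
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together and open TCP/4420 from the initiator interface
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2/10.0.0.3 from the host and to 10.0.0.1 from inside the namespace that follow are the sanity check that this wiring is in place before nvmf_tgt starts.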
00:08:12.346 22:28:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:12.346 22:28:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.346 22:28:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.346 22:28:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.346 22:28:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:12.346 22:28:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:12.346 22:28:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.346 22:28:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.346 22:28:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.346 22:28:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.346 22:28:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.346 22:28:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:12.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:08:12.346 00:08:12.346 --- 10.0.0.2 ping statistics --- 00:08:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.346 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:12.346 22:28:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:12.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:12.346 00:08:12.346 --- 10.0.0.3 ping statistics --- 00:08:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.346 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:12.346 22:28:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:12.346 00:08:12.346 --- 10.0.0.1 ping statistics --- 00:08:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.346 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:12.346 22:28:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.346 22:28:12 -- nvmf/common.sh@421 -- # return 0 00:08:12.346 22:28:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:12.346 22:28:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.346 22:28:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:12.346 22:28:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:12.346 22:28:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.346 22:28:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:12.346 22:28:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:12.346 22:28:13 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:12.346 22:28:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:12.346 22:28:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.346 22:28:13 -- common/autotest_common.sh@10 -- # set +x 00:08:12.346 22:28:13 -- nvmf/common.sh@469 -- # nvmfpid=73429 00:08:12.346 22:28:13 -- nvmf/common.sh@470 -- # waitforlisten 73429 00:08:12.346 22:28:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.346 22:28:13 -- common/autotest_common.sh@829 -- # '[' -z 73429 ']' 00:08:12.346 22:28:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.346 22:28:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.346 22:28:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.346 22:28:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.346 22:28:13 -- common/autotest_common.sh@10 -- # set +x 00:08:12.346 [2024-11-20 22:28:13.072549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:12.346 [2024-11-20 22:28:13.073042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.604 [2024-11-20 22:28:13.211762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.604 [2024-11-20 22:28:13.295846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.604 [2024-11-20 22:28:13.296054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.604 [2024-11-20 22:28:13.296071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.604 [2024-11-20 22:28:13.296086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
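Once nvmf_tgt is up inside the namespace, referrals.sh registers a discovery listener on port 8009 plus three referral entries and then checks them from both sides. A sketch of that flow, again assuming rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock (the --hostnqn/--hostid flags used by the traced nvme discover call are left out here for brevity):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    # three referrals pointing at 127.0.0.2-127.0.0.4, all on the referral port 4430
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # target-side view: what the discovery service will hand out
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # host-side view: what an initiator actually sees in the discovery log
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

Later in the trace the test removes all three referrals again, confirms the list is empty, then re-adds 127.0.0.2 both as a plain discovery referral and as a referral to nqn.2016-06.io.spdk:cnode1 via -n, comparing the RPC and nvme discover views after each step.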
00:08:12.604 [2024-11-20 22:28:13.297331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.604 [2024-11-20 22:28:13.297484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.604 [2024-11-20 22:28:13.297613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.604 [2024-11-20 22:28:13.297640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.541 22:28:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.541 22:28:14 -- common/autotest_common.sh@862 -- # return 0 00:08:13.541 22:28:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:13.541 22:28:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.541 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.541 22:28:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.541 22:28:14 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.541 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.541 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.541 [2024-11-20 22:28:14.172222] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.541 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.541 22:28:14 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:13.541 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.541 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.542 [2024-11-20 22:28:14.199935] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:13.542 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.542 22:28:14 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:13.542 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.542 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.542 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.542 22:28:14 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:13.542 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.542 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.542 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.542 22:28:14 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:13.542 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.542 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.542 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.542 22:28:14 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.542 22:28:14 -- target/referrals.sh@48 -- # jq length 00:08:13.542 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.542 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.542 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.814 22:28:14 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:13.814 22:28:14 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:13.814 22:28:14 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.814 22:28:14 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.814 22:28:14 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
00:08:13.814 22:28:14 -- target/referrals.sh@21 -- # sort 00:08:13.814 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.814 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.814 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.814 22:28:14 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.814 22:28:14 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.814 22:28:14 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:13.814 22:28:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.814 22:28:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.814 22:28:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.814 22:28:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.814 22:28:14 -- target/referrals.sh@26 -- # sort 00:08:13.814 22:28:14 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.814 22:28:14 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.814 22:28:14 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:13.814 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.814 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.814 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.814 22:28:14 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:13.814 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.814 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.814 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.814 22:28:14 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:13.814 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.814 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.814 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.814 22:28:14 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.814 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.814 22:28:14 -- target/referrals.sh@56 -- # jq length 00:08:13.814 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:13.814 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.073 22:28:14 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:14.073 22:28:14 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:14.073 22:28:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.073 22:28:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.073 22:28:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.073 22:28:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.073 22:28:14 -- target/referrals.sh@26 -- # sort 00:08:14.073 22:28:14 -- target/referrals.sh@26 -- # echo 00:08:14.073 22:28:14 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:14.073 22:28:14 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:14.073 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.073 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:14.073 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.073 22:28:14 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:14.073 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.073 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:14.073 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.073 22:28:14 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:14.073 22:28:14 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:14.073 22:28:14 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.073 22:28:14 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:14.073 22:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.073 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:14.073 22:28:14 -- target/referrals.sh@21 -- # sort 00:08:14.073 22:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.073 22:28:14 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:14.073 22:28:14 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:14.073 22:28:14 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:14.073 22:28:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.073 22:28:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.073 22:28:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.073 22:28:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.073 22:28:14 -- target/referrals.sh@26 -- # sort 00:08:14.332 22:28:14 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:14.332 22:28:14 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:14.332 22:28:14 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:14.332 22:28:14 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:14.332 22:28:14 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:14.332 22:28:14 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.332 22:28:14 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:14.332 22:28:15 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:14.332 22:28:15 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:14.332 22:28:15 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:14.332 22:28:15 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:14.332 22:28:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:14.332 22:28:15 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.591 22:28:15 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:14.591 22:28:15 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:14.591 22:28:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.591 22:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.591 22:28:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.591 22:28:15 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:14.591 22:28:15 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:14.591 22:28:15 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.591 22:28:15 -- target/referrals.sh@21 -- # sort 00:08:14.591 22:28:15 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:14.591 22:28:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.591 22:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.591 22:28:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.591 22:28:15 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:14.591 22:28:15 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:14.591 22:28:15 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:14.591 22:28:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.591 22:28:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.591 22:28:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.591 22:28:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.591 22:28:15 -- target/referrals.sh@26 -- # sort 00:08:14.591 22:28:15 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:14.591 22:28:15 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:14.591 22:28:15 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:14.591 22:28:15 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:14.591 22:28:15 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:14.591 22:28:15 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.591 22:28:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:14.849 22:28:15 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:14.850 22:28:15 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:14.850 22:28:15 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:14.850 22:28:15 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:14.850 22:28:15 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.850 22:28:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:14.850 22:28:15 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:14.850 22:28:15 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:14.850 22:28:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.850 22:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.850 22:28:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.850 22:28:15 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.850 22:28:15 -- target/referrals.sh@82 -- # jq length 00:08:14.850 22:28:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.850 22:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.850 22:28:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.109 22:28:15 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:15.109 22:28:15 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:15.109 22:28:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.109 22:28:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.109 22:28:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.109 22:28:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.109 22:28:15 -- target/referrals.sh@26 -- # sort 00:08:15.109 22:28:15 -- target/referrals.sh@26 -- # echo 00:08:15.109 22:28:15 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:15.109 22:28:15 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:15.109 22:28:15 -- target/referrals.sh@86 -- # nvmftestfini 00:08:15.109 22:28:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:15.109 22:28:15 -- nvmf/common.sh@116 -- # sync 00:08:15.109 22:28:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:15.109 22:28:15 -- nvmf/common.sh@119 -- # set +e 00:08:15.109 22:28:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:15.109 22:28:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:15.109 rmmod nvme_tcp 00:08:15.109 rmmod nvme_fabrics 00:08:15.368 rmmod nvme_keyring 00:08:15.368 22:28:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:15.368 22:28:15 -- nvmf/common.sh@123 -- # set -e 00:08:15.368 22:28:15 -- nvmf/common.sh@124 -- # return 0 00:08:15.368 22:28:15 -- nvmf/common.sh@477 -- # '[' -n 73429 ']' 00:08:15.368 22:28:15 -- nvmf/common.sh@478 -- # killprocess 73429 00:08:15.368 22:28:15 -- common/autotest_common.sh@936 -- # '[' -z 73429 ']' 00:08:15.368 22:28:15 -- common/autotest_common.sh@940 -- # kill -0 73429 00:08:15.368 22:28:15 -- common/autotest_common.sh@941 -- # uname 00:08:15.368 22:28:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:15.368 22:28:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73429 00:08:15.368 22:28:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:15.368 22:28:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:15.368 killing process with pid 73429 00:08:15.368 22:28:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73429' 00:08:15.368 22:28:15 -- common/autotest_common.sh@955 -- # kill 73429 00:08:15.368 22:28:15 -- common/autotest_common.sh@960 -- # wait 73429 00:08:15.627 22:28:16 -- nvmf/common.sh@480 -- # 
'[' '' == iso ']' 00:08:15.627 22:28:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:15.627 22:28:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:15.627 22:28:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.627 22:28:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:15.627 22:28:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.627 22:28:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.627 22:28:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.627 22:28:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:15.627 00:08:15.627 real 0m3.698s 00:08:15.627 user 0m12.405s 00:08:15.627 sys 0m0.989s 00:08:15.627 22:28:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.627 22:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.627 ************************************ 00:08:15.627 END TEST nvmf_referrals 00:08:15.627 ************************************ 00:08:15.627 22:28:16 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:15.627 22:28:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:15.627 22:28:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.627 22:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:15.627 ************************************ 00:08:15.627 START TEST nvmf_connect_disconnect 00:08:15.627 ************************************ 00:08:15.627 22:28:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:15.627 * Looking for test storage... 00:08:15.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:15.627 22:28:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:15.627 22:28:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:15.627 22:28:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:15.886 22:28:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:15.886 22:28:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:15.886 22:28:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:15.886 22:28:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:15.886 22:28:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:15.886 22:28:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:15.886 22:28:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.886 22:28:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:15.886 22:28:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:15.886 22:28:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:15.886 22:28:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:15.886 22:28:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:15.886 22:28:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:15.886 22:28:16 -- scripts/common.sh@344 -- # : 1 00:08:15.886 22:28:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:15.886 22:28:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.886 22:28:16 -- scripts/common.sh@364 -- # decimal 1 00:08:15.886 22:28:16 -- scripts/common.sh@352 -- # local d=1 00:08:15.886 22:28:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.886 22:28:16 -- scripts/common.sh@354 -- # echo 1 00:08:15.886 22:28:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:15.886 22:28:16 -- scripts/common.sh@365 -- # decimal 2 00:08:15.886 22:28:16 -- scripts/common.sh@352 -- # local d=2 00:08:15.886 22:28:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.886 22:28:16 -- scripts/common.sh@354 -- # echo 2 00:08:15.886 22:28:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:15.886 22:28:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:15.886 22:28:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:15.886 22:28:16 -- scripts/common.sh@367 -- # return 0 00:08:15.886 22:28:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.886 22:28:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:15.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.886 --rc genhtml_branch_coverage=1 00:08:15.886 --rc genhtml_function_coverage=1 00:08:15.886 --rc genhtml_legend=1 00:08:15.886 --rc geninfo_all_blocks=1 00:08:15.886 --rc geninfo_unexecuted_blocks=1 00:08:15.886 00:08:15.886 ' 00:08:15.886 22:28:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:15.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.886 --rc genhtml_branch_coverage=1 00:08:15.886 --rc genhtml_function_coverage=1 00:08:15.886 --rc genhtml_legend=1 00:08:15.886 --rc geninfo_all_blocks=1 00:08:15.886 --rc geninfo_unexecuted_blocks=1 00:08:15.886 00:08:15.886 ' 00:08:15.886 22:28:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:15.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.886 --rc genhtml_branch_coverage=1 00:08:15.886 --rc genhtml_function_coverage=1 00:08:15.886 --rc genhtml_legend=1 00:08:15.886 --rc geninfo_all_blocks=1 00:08:15.886 --rc geninfo_unexecuted_blocks=1 00:08:15.886 00:08:15.886 ' 00:08:15.886 22:28:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:15.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.886 --rc genhtml_branch_coverage=1 00:08:15.886 --rc genhtml_function_coverage=1 00:08:15.886 --rc genhtml_legend=1 00:08:15.886 --rc geninfo_all_blocks=1 00:08:15.886 --rc geninfo_unexecuted_blocks=1 00:08:15.886 00:08:15.886 ' 00:08:15.886 22:28:16 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.886 22:28:16 -- nvmf/common.sh@7 -- # uname -s 00:08:15.886 22:28:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.886 22:28:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.886 22:28:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.886 22:28:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.886 22:28:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.886 22:28:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.886 22:28:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.886 22:28:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.886 22:28:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.886 22:28:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.887 22:28:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
00:08:15.887 22:28:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:08:15.887 22:28:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.887 22:28:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.887 22:28:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:15.887 22:28:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.887 22:28:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.887 22:28:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.887 22:28:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.887 22:28:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.887 22:28:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.887 22:28:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.887 22:28:16 -- paths/export.sh@5 -- # export PATH 00:08:15.887 22:28:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.887 22:28:16 -- nvmf/common.sh@46 -- # : 0 00:08:15.887 22:28:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:15.887 22:28:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:15.887 22:28:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:15.887 22:28:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.887 22:28:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.887 22:28:16 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:15.887 22:28:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:15.887 22:28:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:15.887 22:28:16 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.887 22:28:16 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.887 22:28:16 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:15.887 22:28:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:15.887 22:28:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.887 22:28:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:15.887 22:28:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:15.887 22:28:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:15.887 22:28:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.887 22:28:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.887 22:28:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.887 22:28:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:15.887 22:28:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:15.887 22:28:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:15.887 22:28:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:15.887 22:28:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:15.887 22:28:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:15.887 22:28:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.887 22:28:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.887 22:28:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:15.887 22:28:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:15.887 22:28:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:15.887 22:28:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:15.887 22:28:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:15.887 22:28:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.887 22:28:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:15.887 22:28:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:15.887 22:28:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:15.887 22:28:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:15.887 22:28:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:15.887 22:28:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:15.887 Cannot find device "nvmf_tgt_br" 00:08:15.887 22:28:16 -- nvmf/common.sh@154 -- # true 00:08:15.887 22:28:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.887 Cannot find device "nvmf_tgt_br2" 00:08:15.887 22:28:16 -- nvmf/common.sh@155 -- # true 00:08:15.887 22:28:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:15.887 22:28:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:15.887 Cannot find device "nvmf_tgt_br" 00:08:15.887 22:28:16 -- nvmf/common.sh@157 -- # true 00:08:15.887 22:28:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:15.887 Cannot find device "nvmf_tgt_br2" 00:08:15.887 22:28:16 -- nvmf/common.sh@158 -- # true 00:08:15.887 22:28:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:15.887 22:28:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:15.887 22:28:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:15.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.887 22:28:16 -- nvmf/common.sh@161 -- # true 00:08:15.887 22:28:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.887 22:28:16 -- nvmf/common.sh@162 -- # true 00:08:15.887 22:28:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.887 22:28:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.887 22:28:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.887 22:28:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.887 22:28:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.887 22:28:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.887 22:28:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.887 22:28:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:15.887 22:28:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:15.887 22:28:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:15.887 22:28:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:15.887 22:28:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:15.887 22:28:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:16.146 22:28:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:16.146 22:28:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:16.146 22:28:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:16.146 22:28:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:16.146 22:28:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:16.146 22:28:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:16.146 22:28:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:16.146 22:28:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:16.146 22:28:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:16.146 22:28:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:16.146 22:28:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:16.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:16.146 00:08:16.146 --- 10.0.0.2 ping statistics --- 00:08:16.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.146 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:16.146 22:28:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:16.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:16.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:16.146 00:08:16.146 --- 10.0.0.3 ping statistics --- 00:08:16.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.146 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:16.146 22:28:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:16.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:16.146 00:08:16.146 --- 10.0.0.1 ping statistics --- 00:08:16.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.146 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:16.146 22:28:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.146 22:28:16 -- nvmf/common.sh@421 -- # return 0 00:08:16.146 22:28:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:16.146 22:28:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.146 22:28:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:16.146 22:28:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:16.146 22:28:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.146 22:28:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:16.146 22:28:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:16.146 22:28:16 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:16.146 22:28:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:16.147 22:28:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.147 22:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:16.147 22:28:16 -- nvmf/common.sh@469 -- # nvmfpid=73744 00:08:16.147 22:28:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.147 22:28:16 -- nvmf/common.sh@470 -- # waitforlisten 73744 00:08:16.147 22:28:16 -- common/autotest_common.sh@829 -- # '[' -z 73744 ']' 00:08:16.147 22:28:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.147 22:28:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.147 22:28:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.147 22:28:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.147 22:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:16.147 [2024-11-20 22:28:16.799440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:16.147 [2024-11-20 22:28:16.799539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.405 [2024-11-20 22:28:16.943798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.405 [2024-11-20 22:28:17.028913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:16.405 [2024-11-20 22:28:17.029113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.406 [2024-11-20 22:28:17.029129] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.406 [2024-11-20 22:28:17.029141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
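[Editor's note] The network plumbing traced again above (nvmf_veth_init) can be condensed into the stand-alone sketch below. Namespace, interface, and address names are taken directly from the trace; error handling and the delete-if-present preamble are omitted.

  # Sketch of the veth/bridge topology built before each target test
  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Target-side ends live inside the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side ends together and open TCP/4420 from the initiator interface
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity pings matching the statistics printed in the trace
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1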
00:08:16.406 [2024-11-20 22:28:17.029310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.406 [2024-11-20 22:28:17.029834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.406 [2024-11-20 22:28:17.029996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.406 [2024-11-20 22:28:17.030013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.341 22:28:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.341 22:28:17 -- common/autotest_common.sh@862 -- # return 0 00:08:17.341 22:28:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:17.341 22:28:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.341 22:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.341 22:28:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:17.341 22:28:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.341 22:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.341 [2024-11-20 22:28:17.887267] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.341 22:28:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:17.341 22:28:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.341 22:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.341 22:28:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:17.341 22:28:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.341 22:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.341 22:28:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.341 22:28:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.341 22:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.341 22:28:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.341 22:28:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.341 22:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.341 [2024-11-20 22:28:17.963640] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.341 22:28:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:17.341 22:28:17 -- target/connect_disconnect.sh@34 -- # set +x 00:08:19.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
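[Editor's note] The long run of "disconnected 1 controller(s)" lines that follows comes from 100 connect/disconnect cycles against nqn.2016-06.io.spdk:cnode1. The sketch below shows both the target-side setup and a simplified loop; the RPC calls, addresses, and nvme flags mirror the trace, while the loop structure and the comment about waiting for the namespace are assumptions about what the script does between connect and disconnect.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path
  NQN=nqn.2016-06.io.spdk:cnode1
  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27
  NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27

  # Target side: one 64 MiB / 512 B-block malloc namespace behind a TCP listener
  "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0
  "$RPC" bdev_malloc_create 64 512                       # creates Malloc0, as in the trace
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # Host side: 100 connect/disconnect cycles; each disconnect prints the
  # "NQN:... disconnected 1 controller(s)" lines seen in the log
  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
      # (the real script waits for the namespace device to appear before disconnecting)
      nvme disconnect -n "$NQN"
  done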
00:08:28.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.293 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:19.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.296 22:32:03 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
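[Editor's note] The nvmftestfini teardown traced below unwinds the setup in reverse. A condensed sketch of that sequence follows; the module removals and address flush appear verbatim in the trace, while the explicit namespace delete is an assumption about what the remove_spdk_ns wrapper does, and $nvmfpid refers to the target pid captured at startup.

  # Sketch of the teardown performed by nvmftestfini
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  kill "$nvmfpid" && wait "$nvmfpid"          # killprocess on the nvmf_tgt started earlier

  ip netns delete nvmf_tgt_ns_spdk            # assumed body of remove_spdk_ns
  ip -4 addr flush nvmf_init_if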
00:12:03.296 22:32:03 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:03.296 22:32:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:03.296 22:32:03 -- nvmf/common.sh@116 -- # sync 00:12:03.296 22:32:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:03.296 22:32:03 -- nvmf/common.sh@119 -- # set +e 00:12:03.296 22:32:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:03.296 22:32:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:03.296 rmmod nvme_tcp 00:12:03.296 rmmod nvme_fabrics 00:12:03.296 rmmod nvme_keyring 00:12:03.296 22:32:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:03.296 22:32:03 -- nvmf/common.sh@123 -- # set -e 00:12:03.296 22:32:03 -- nvmf/common.sh@124 -- # return 0 00:12:03.296 22:32:03 -- nvmf/common.sh@477 -- # '[' -n 73744 ']' 00:12:03.296 22:32:03 -- nvmf/common.sh@478 -- # killprocess 73744 00:12:03.296 22:32:03 -- common/autotest_common.sh@936 -- # '[' -z 73744 ']' 00:12:03.296 22:32:03 -- common/autotest_common.sh@940 -- # kill -0 73744 00:12:03.296 22:32:03 -- common/autotest_common.sh@941 -- # uname 00:12:03.296 22:32:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.296 22:32:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73744 00:12:03.296 killing process with pid 73744 00:12:03.296 22:32:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:03.296 22:32:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:03.296 22:32:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73744' 00:12:03.296 22:32:03 -- common/autotest_common.sh@955 -- # kill 73744 00:12:03.296 22:32:03 -- common/autotest_common.sh@960 -- # wait 73744 00:12:03.296 22:32:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:03.296 22:32:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:03.296 22:32:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:03.296 22:32:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.296 22:32:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:03.296 22:32:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.296 22:32:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.296 22:32:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.555 22:32:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:03.555 00:12:03.555 real 3m47.808s 00:12:03.555 user 14m52.678s 00:12:03.555 sys 0m17.740s 00:12:03.555 ************************************ 00:12:03.555 END TEST nvmf_connect_disconnect 00:12:03.555 ************************************ 00:12:03.555 22:32:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:03.555 22:32:04 -- common/autotest_common.sh@10 -- # set +x 00:12:03.555 22:32:04 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.555 22:32:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:03.555 22:32:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.555 22:32:04 -- common/autotest_common.sh@10 -- # set +x 00:12:03.555 ************************************ 00:12:03.555 START TEST nvmf_multitarget 00:12:03.555 ************************************ 00:12:03.555 22:32:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.555 * Looking for test storage... 
00:12:03.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:03.555 22:32:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:03.555 22:32:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:03.555 22:32:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:03.555 22:32:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:03.555 22:32:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:03.555 22:32:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:03.555 22:32:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:03.555 22:32:04 -- scripts/common.sh@335 -- # IFS=.-: 00:12:03.555 22:32:04 -- scripts/common.sh@335 -- # read -ra ver1 00:12:03.555 22:32:04 -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.555 22:32:04 -- scripts/common.sh@336 -- # read -ra ver2 00:12:03.555 22:32:04 -- scripts/common.sh@337 -- # local 'op=<' 00:12:03.555 22:32:04 -- scripts/common.sh@339 -- # ver1_l=2 00:12:03.555 22:32:04 -- scripts/common.sh@340 -- # ver2_l=1 00:12:03.555 22:32:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:03.555 22:32:04 -- scripts/common.sh@343 -- # case "$op" in 00:12:03.555 22:32:04 -- scripts/common.sh@344 -- # : 1 00:12:03.555 22:32:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:03.555 22:32:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.555 22:32:04 -- scripts/common.sh@364 -- # decimal 1 00:12:03.555 22:32:04 -- scripts/common.sh@352 -- # local d=1 00:12:03.555 22:32:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.555 22:32:04 -- scripts/common.sh@354 -- # echo 1 00:12:03.555 22:32:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:03.555 22:32:04 -- scripts/common.sh@365 -- # decimal 2 00:12:03.555 22:32:04 -- scripts/common.sh@352 -- # local d=2 00:12:03.555 22:32:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.555 22:32:04 -- scripts/common.sh@354 -- # echo 2 00:12:03.555 22:32:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:03.555 22:32:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:03.555 22:32:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:03.555 22:32:04 -- scripts/common.sh@367 -- # return 0 00:12:03.555 22:32:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.555 22:32:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:03.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.555 --rc genhtml_branch_coverage=1 00:12:03.555 --rc genhtml_function_coverage=1 00:12:03.555 --rc genhtml_legend=1 00:12:03.555 --rc geninfo_all_blocks=1 00:12:03.555 --rc geninfo_unexecuted_blocks=1 00:12:03.555 00:12:03.555 ' 00:12:03.555 22:32:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:03.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.555 --rc genhtml_branch_coverage=1 00:12:03.555 --rc genhtml_function_coverage=1 00:12:03.555 --rc genhtml_legend=1 00:12:03.555 --rc geninfo_all_blocks=1 00:12:03.555 --rc geninfo_unexecuted_blocks=1 00:12:03.555 00:12:03.555 ' 00:12:03.555 22:32:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:03.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.555 --rc genhtml_branch_coverage=1 00:12:03.555 --rc genhtml_function_coverage=1 00:12:03.555 --rc genhtml_legend=1 00:12:03.555 --rc geninfo_all_blocks=1 00:12:03.555 --rc geninfo_unexecuted_blocks=1 00:12:03.556 00:12:03.556 ' 00:12:03.556 
22:32:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:03.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.556 --rc genhtml_branch_coverage=1 00:12:03.556 --rc genhtml_function_coverage=1 00:12:03.556 --rc genhtml_legend=1 00:12:03.556 --rc geninfo_all_blocks=1 00:12:03.556 --rc geninfo_unexecuted_blocks=1 00:12:03.556 00:12:03.556 ' 00:12:03.556 22:32:04 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.556 22:32:04 -- nvmf/common.sh@7 -- # uname -s 00:12:03.556 22:32:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.556 22:32:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.556 22:32:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.556 22:32:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.556 22:32:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.556 22:32:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.556 22:32:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.556 22:32:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.556 22:32:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.556 22:32:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.556 22:32:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:03.556 22:32:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:03.556 22:32:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.556 22:32:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.556 22:32:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:03.556 22:32:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.556 22:32:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.556 22:32:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.556 22:32:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.556 22:32:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.556 22:32:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.556 22:32:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.556 22:32:04 -- paths/export.sh@5 -- # export PATH 00:12:03.556 22:32:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.556 22:32:04 -- nvmf/common.sh@46 -- # : 0 00:12:03.556 22:32:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:03.815 22:32:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:03.815 22:32:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:03.815 22:32:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.815 22:32:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.815 22:32:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:03.815 22:32:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:03.815 22:32:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:03.815 22:32:04 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:03.815 22:32:04 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:03.815 22:32:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:03.815 22:32:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.815 22:32:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:03.815 22:32:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:03.815 22:32:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:03.815 22:32:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.815 22:32:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.815 22:32:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.815 22:32:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:03.815 22:32:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:03.815 22:32:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:03.815 22:32:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:03.815 22:32:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:03.815 22:32:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:03.815 22:32:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.815 22:32:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.815 22:32:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:03.815 22:32:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:03.815 22:32:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:03.815 22:32:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:03.815 22:32:04 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:03.815 22:32:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.815 22:32:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:03.815 22:32:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:03.815 22:32:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:03.815 22:32:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:03.815 22:32:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:03.815 22:32:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:03.815 Cannot find device "nvmf_tgt_br" 00:12:03.815 22:32:04 -- nvmf/common.sh@154 -- # true 00:12:03.815 22:32:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.815 Cannot find device "nvmf_tgt_br2" 00:12:03.815 22:32:04 -- nvmf/common.sh@155 -- # true 00:12:03.815 22:32:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:03.815 22:32:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:03.815 Cannot find device "nvmf_tgt_br" 00:12:03.815 22:32:04 -- nvmf/common.sh@157 -- # true 00:12:03.815 22:32:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:03.815 Cannot find device "nvmf_tgt_br2" 00:12:03.815 22:32:04 -- nvmf/common.sh@158 -- # true 00:12:03.815 22:32:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:03.815 22:32:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:03.815 22:32:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.815 22:32:04 -- nvmf/common.sh@161 -- # true 00:12:03.816 22:32:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.816 22:32:04 -- nvmf/common.sh@162 -- # true 00:12:03.816 22:32:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:03.816 22:32:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:03.816 22:32:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:03.816 22:32:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:03.816 22:32:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:03.816 22:32:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:03.816 22:32:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.816 22:32:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:03.816 22:32:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:03.816 22:32:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:03.816 22:32:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:03.816 22:32:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:03.816 22:32:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:03.816 22:32:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:03.816 22:32:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:03.816 22:32:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:03.816 22:32:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:03.816 22:32:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:03.816 22:32:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:04.077 22:32:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:04.077 22:32:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:04.077 22:32:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:04.077 22:32:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:04.077 22:32:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:04.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:12:04.077 00:12:04.077 --- 10.0.0.2 ping statistics --- 00:12:04.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.077 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:04.077 22:32:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:04.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:04.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:12:04.077 00:12:04.077 --- 10.0.0.3 ping statistics --- 00:12:04.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.077 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:04.077 22:32:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:04.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:04.077 00:12:04.077 --- 10.0.0.1 ping statistics --- 00:12:04.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.077 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:04.077 22:32:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.077 22:32:04 -- nvmf/common.sh@421 -- # return 0 00:12:04.077 22:32:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:04.077 22:32:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.077 22:32:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:04.077 22:32:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:04.077 22:32:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.077 22:32:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:04.077 22:32:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:04.077 22:32:04 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:04.077 22:32:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:04.077 22:32:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.077 22:32:04 -- common/autotest_common.sh@10 -- # set +x 00:12:04.077 22:32:04 -- nvmf/common.sh@469 -- # nvmfpid=77551 00:12:04.077 22:32:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.077 22:32:04 -- nvmf/common.sh@470 -- # waitforlisten 77551 00:12:04.077 22:32:04 -- common/autotest_common.sh@829 -- # '[' -z 77551 ']' 00:12:04.077 22:32:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.077 22:32:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
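The nvmf_veth_init steps traced above build the virtual test network: a dedicated network namespace (nvmf_tgt_ns_spdk) holds the target-side veth endpoints (10.0.0.2 and 10.0.0.3), the initiator endpoint (10.0.0.1) stays in the root namespace, all host-side peers are enslaved to the nvmf_br bridge, and iptables rules admit NVMe/TCP traffic on port 4420. A condensed stand-alone sketch of the same topology follows; it reuses the device and namespace names from the log, must run as root, and is illustrative rather than the exact nvmf_veth_init implementation (which, as shown above, also tears down any leftover devices first).

# Target namespace plus three veth pairs: one initiator-side, two target-side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target endpoints into the namespace and address everything in 10.0.0.0/24.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring every link up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together; allow NVMe/TCP (port 4420) and bridge forwarding.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: both target addresses reachable from the root namespace, and vice versa.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1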
00:12:04.077 22:32:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.077 22:32:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.077 22:32:04 -- common/autotest_common.sh@10 -- # set +x 00:12:04.077 [2024-11-20 22:32:04.684347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:04.077 [2024-11-20 22:32:04.684429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.336 [2024-11-20 22:32:04.822021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.336 [2024-11-20 22:32:04.897211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:04.336 [2024-11-20 22:32:04.897360] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.336 [2024-11-20 22:32:04.897374] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.336 [2024-11-20 22:32:04.897383] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.336 [2024-11-20 22:32:04.897465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.336 [2024-11-20 22:32:04.898621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.336 [2024-11-20 22:32:04.898776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.336 [2024-11-20 22:32:04.898787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.271 22:32:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.271 22:32:05 -- common/autotest_common.sh@862 -- # return 0 00:12:05.271 22:32:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:05.271 22:32:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.271 22:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:05.271 22:32:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.271 22:32:05 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:05.271 22:32:05 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:05.271 22:32:05 -- target/multitarget.sh@21 -- # jq length 00:12:05.271 22:32:05 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:05.271 22:32:05 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:05.529 "nvmf_tgt_1" 00:12:05.529 22:32:06 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:05.529 "nvmf_tgt_2" 00:12:05.529 22:32:06 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:05.529 22:32:06 -- target/multitarget.sh@28 -- # jq length 00:12:05.786 22:32:06 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:05.786 22:32:06 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:05.786 true 00:12:05.786 22:32:06 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:06.044 true 00:12:06.044 22:32:06 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.044 22:32:06 -- target/multitarget.sh@35 -- # jq length 00:12:06.044 22:32:06 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:06.044 22:32:06 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:06.044 22:32:06 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:06.044 22:32:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:06.044 22:32:06 -- nvmf/common.sh@116 -- # sync 00:12:06.044 22:32:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:06.044 22:32:06 -- nvmf/common.sh@119 -- # set +e 00:12:06.044 22:32:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:06.044 22:32:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:06.044 rmmod nvme_tcp 00:12:06.303 rmmod nvme_fabrics 00:12:06.303 rmmod nvme_keyring 00:12:06.303 22:32:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:06.303 22:32:06 -- nvmf/common.sh@123 -- # set -e 00:12:06.303 22:32:06 -- nvmf/common.sh@124 -- # return 0 00:12:06.303 22:32:06 -- nvmf/common.sh@477 -- # '[' -n 77551 ']' 00:12:06.303 22:32:06 -- nvmf/common.sh@478 -- # killprocess 77551 00:12:06.303 22:32:06 -- common/autotest_common.sh@936 -- # '[' -z 77551 ']' 00:12:06.303 22:32:06 -- common/autotest_common.sh@940 -- # kill -0 77551 00:12:06.303 22:32:06 -- common/autotest_common.sh@941 -- # uname 00:12:06.303 22:32:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:06.303 22:32:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77551 00:12:06.303 killing process with pid 77551 00:12:06.303 22:32:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:06.303 22:32:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:06.303 22:32:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77551' 00:12:06.303 22:32:06 -- common/autotest_common.sh@955 -- # kill 77551 00:12:06.303 22:32:06 -- common/autotest_common.sh@960 -- # wait 77551 00:12:06.562 22:32:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:06.562 22:32:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:06.562 22:32:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:06.562 22:32:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.562 22:32:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:06.562 22:32:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.562 22:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.562 22:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.562 22:32:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:06.562 ************************************ 00:12:06.562 END TEST nvmf_multitarget 00:12:06.562 ************************************ 00:12:06.562 00:12:06.562 real 0m3.052s 00:12:06.562 user 0m10.081s 00:12:06.562 sys 0m0.741s 00:12:06.562 22:32:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:06.562 22:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:06.562 22:32:07 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:06.562 22:32:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:06.562 22:32:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:06.562 
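The multitarget checks above all follow one pattern: query the current target list, assert its length with jq, mutate it, then assert again. A minimal sketch of that pattern is below, using the test's multitarget_rpc.py wrapper exactly as it appears in the log (paths assume the same /home/vagrant/spdk_repo checkout; the wrapper forwards nvmf_get_targets / nvmf_create_target / nvmf_delete_target to the running nvmf_tgt over its RPC socket). The expect_targets helper is an illustrative addition, not part of the test script.

rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

# Helper (hypothetical): fail loudly if the number of targets is not what we expect.
expect_targets() {
        local want=$1 got
        got=$($rpc nvmf_get_targets | jq length)
        [ "$got" -eq "$want" ] || { echo "expected $want targets, found $got"; return 1; }
}

expect_targets 1                                  # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32       # add two named targets...
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
expect_targets 3                                  # ...and see them in the list
$rpc nvmf_delete_target -n nvmf_tgt_1             # remove them again
$rpc nvmf_delete_target -n nvmf_tgt_2
expect_targets 1                                  # back to just the default target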
22:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:06.562 ************************************ 00:12:06.562 START TEST nvmf_rpc 00:12:06.562 ************************************ 00:12:06.562 22:32:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:06.562 * Looking for test storage... 00:12:06.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:06.562 22:32:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:06.562 22:32:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:06.562 22:32:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:06.821 22:32:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:06.821 22:32:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:06.821 22:32:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:06.821 22:32:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:06.821 22:32:07 -- scripts/common.sh@335 -- # IFS=.-: 00:12:06.821 22:32:07 -- scripts/common.sh@335 -- # read -ra ver1 00:12:06.821 22:32:07 -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.821 22:32:07 -- scripts/common.sh@336 -- # read -ra ver2 00:12:06.821 22:32:07 -- scripts/common.sh@337 -- # local 'op=<' 00:12:06.821 22:32:07 -- scripts/common.sh@339 -- # ver1_l=2 00:12:06.821 22:32:07 -- scripts/common.sh@340 -- # ver2_l=1 00:12:06.821 22:32:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:06.821 22:32:07 -- scripts/common.sh@343 -- # case "$op" in 00:12:06.821 22:32:07 -- scripts/common.sh@344 -- # : 1 00:12:06.821 22:32:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:06.821 22:32:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.821 22:32:07 -- scripts/common.sh@364 -- # decimal 1 00:12:06.821 22:32:07 -- scripts/common.sh@352 -- # local d=1 00:12:06.821 22:32:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.821 22:32:07 -- scripts/common.sh@354 -- # echo 1 00:12:06.821 22:32:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:06.821 22:32:07 -- scripts/common.sh@365 -- # decimal 2 00:12:06.821 22:32:07 -- scripts/common.sh@352 -- # local d=2 00:12:06.821 22:32:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.821 22:32:07 -- scripts/common.sh@354 -- # echo 2 00:12:06.821 22:32:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:06.821 22:32:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:06.821 22:32:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:06.821 22:32:07 -- scripts/common.sh@367 -- # return 0 00:12:06.821 22:32:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.821 22:32:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:06.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.821 --rc genhtml_branch_coverage=1 00:12:06.821 --rc genhtml_function_coverage=1 00:12:06.821 --rc genhtml_legend=1 00:12:06.821 --rc geninfo_all_blocks=1 00:12:06.821 --rc geninfo_unexecuted_blocks=1 00:12:06.821 00:12:06.821 ' 00:12:06.821 22:32:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:06.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.821 --rc genhtml_branch_coverage=1 00:12:06.821 --rc genhtml_function_coverage=1 00:12:06.821 --rc genhtml_legend=1 00:12:06.821 --rc geninfo_all_blocks=1 00:12:06.821 --rc geninfo_unexecuted_blocks=1 00:12:06.821 00:12:06.821 ' 00:12:06.821 22:32:07 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:06.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.821 --rc genhtml_branch_coverage=1 00:12:06.821 --rc genhtml_function_coverage=1 00:12:06.821 --rc genhtml_legend=1 00:12:06.821 --rc geninfo_all_blocks=1 00:12:06.821 --rc geninfo_unexecuted_blocks=1 00:12:06.821 00:12:06.821 ' 00:12:06.821 22:32:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:06.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.821 --rc genhtml_branch_coverage=1 00:12:06.821 --rc genhtml_function_coverage=1 00:12:06.821 --rc genhtml_legend=1 00:12:06.821 --rc geninfo_all_blocks=1 00:12:06.821 --rc geninfo_unexecuted_blocks=1 00:12:06.821 00:12:06.821 ' 00:12:06.821 22:32:07 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:06.821 22:32:07 -- nvmf/common.sh@7 -- # uname -s 00:12:06.821 22:32:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.821 22:32:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.821 22:32:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.821 22:32:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.821 22:32:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.821 22:32:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.821 22:32:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.821 22:32:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.821 22:32:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.821 22:32:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.821 22:32:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:06.821 22:32:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:06.821 22:32:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.821 22:32:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.821 22:32:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:06.821 22:32:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.821 22:32:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.821 22:32:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.821 22:32:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.821 22:32:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.822 22:32:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.822 22:32:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.822 22:32:07 -- paths/export.sh@5 -- # export PATH 00:12:06.822 22:32:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.822 22:32:07 -- nvmf/common.sh@46 -- # : 0 00:12:06.822 22:32:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:06.822 22:32:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:06.822 22:32:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:06.822 22:32:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.822 22:32:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.822 22:32:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:06.822 22:32:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:06.822 22:32:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:06.822 22:32:07 -- target/rpc.sh@11 -- # loops=5 00:12:06.822 22:32:07 -- target/rpc.sh@23 -- # nvmftestinit 00:12:06.822 22:32:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:06.822 22:32:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.822 22:32:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:06.822 22:32:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:06.822 22:32:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:06.822 22:32:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.822 22:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.822 22:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.822 22:32:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:06.822 22:32:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:06.822 22:32:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:06.822 22:32:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:06.822 22:32:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:06.822 22:32:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:06.822 22:32:07 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:06.822 22:32:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.822 22:32:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:06.822 22:32:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:06.822 22:32:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:06.822 22:32:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:06.822 22:32:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:06.822 22:32:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.822 22:32:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:06.822 22:32:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:06.822 22:32:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:06.822 22:32:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:06.822 22:32:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:06.822 22:32:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:06.822 Cannot find device "nvmf_tgt_br" 00:12:06.822 22:32:07 -- nvmf/common.sh@154 -- # true 00:12:06.822 22:32:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:06.822 Cannot find device "nvmf_tgt_br2" 00:12:06.822 22:32:07 -- nvmf/common.sh@155 -- # true 00:12:06.822 22:32:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:06.822 22:32:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:06.822 Cannot find device "nvmf_tgt_br" 00:12:06.822 22:32:07 -- nvmf/common.sh@157 -- # true 00:12:06.822 22:32:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:06.822 Cannot find device "nvmf_tgt_br2" 00:12:06.822 22:32:07 -- nvmf/common.sh@158 -- # true 00:12:06.822 22:32:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:06.822 22:32:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:06.822 22:32:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:06.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.822 22:32:07 -- nvmf/common.sh@161 -- # true 00:12:06.822 22:32:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:06.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.822 22:32:07 -- nvmf/common.sh@162 -- # true 00:12:06.822 22:32:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:07.081 22:32:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:07.081 22:32:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:07.081 22:32:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:07.081 22:32:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:07.081 22:32:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:07.081 22:32:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:07.081 22:32:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:07.081 22:32:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:07.082 22:32:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:07.082 22:32:07 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:12:07.082 22:32:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:07.082 22:32:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:07.082 22:32:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:07.082 22:32:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:07.082 22:32:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:07.082 22:32:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:07.082 22:32:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:07.082 22:32:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:07.082 22:32:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:07.082 22:32:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:07.082 22:32:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:07.082 22:32:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:07.082 22:32:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:07.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:12:07.082 00:12:07.082 --- 10.0.0.2 ping statistics --- 00:12:07.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.082 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:07.082 22:32:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:07.082 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:07.082 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:12:07.082 00:12:07.082 --- 10.0.0.3 ping statistics --- 00:12:07.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.082 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:07.082 22:32:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:07.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:07.082 00:12:07.082 --- 10.0.0.1 ping statistics --- 00:12:07.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.082 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:07.082 22:32:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.082 22:32:07 -- nvmf/common.sh@421 -- # return 0 00:12:07.082 22:32:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:07.082 22:32:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.082 22:32:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:07.082 22:32:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:07.082 22:32:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.082 22:32:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:07.082 22:32:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:07.082 22:32:07 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:07.082 22:32:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:07.082 22:32:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:07.082 22:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:07.082 22:32:07 -- nvmf/common.sh@469 -- # nvmfpid=77791 00:12:07.082 22:32:07 -- nvmf/common.sh@470 -- # waitforlisten 77791 00:12:07.082 22:32:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.082 22:32:07 -- common/autotest_common.sh@829 -- # '[' -z 77791 ']' 00:12:07.082 22:32:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.082 22:32:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.082 22:32:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.082 22:32:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.082 22:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:07.341 [2024-11-20 22:32:07.822402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:07.341 [2024-11-20 22:32:07.822863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.341 [2024-11-20 22:32:07.954913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.341 [2024-11-20 22:32:08.023535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:07.341 [2024-11-20 22:32:08.023691] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.341 [2024-11-20 22:32:08.023704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.341 [2024-11-20 22:32:08.023712] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
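nvmfappstart, traced above, launches the nvmf_tgt application inside the target namespace (the NVMF_APP array is prefixed with "ip netns exec nvmf_tgt_ns_spdk") and then blocks in waitforlisten until the application's JSON-RPC socket at /var/tmp/spdk.sock is available. A simplified stand-in for that startup sequence is sketched below; the socket-polling loop is an assumption standing in for the real waitforlisten helper, which also verifies the process is still alive and speaks RPC to it while waiting.

# Start the target in the namespace: shm id 0, all tracepoint groups, cores 0-3.
ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified waitforlisten: poll until the RPC unix socket appears (or the app died).
for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.1
done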
00:12:07.341 [2024-11-20 22:32:08.023864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.341 [2024-11-20 22:32:08.024786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.341 [2024-11-20 22:32:08.024924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.341 [2024-11-20 22:32:08.024936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.275 22:32:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.275 22:32:08 -- common/autotest_common.sh@862 -- # return 0 00:12:08.275 22:32:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:08.275 22:32:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.275 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:08.276 22:32:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.276 22:32:08 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:08.276 22:32:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.276 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:08.276 22:32:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.276 22:32:08 -- target/rpc.sh@26 -- # stats='{ 00:12:08.276 "poll_groups": [ 00:12:08.276 { 00:12:08.276 "admin_qpairs": 0, 00:12:08.276 "completed_nvme_io": 0, 00:12:08.276 "current_admin_qpairs": 0, 00:12:08.276 "current_io_qpairs": 0, 00:12:08.276 "io_qpairs": 0, 00:12:08.276 "name": "nvmf_tgt_poll_group_0", 00:12:08.276 "pending_bdev_io": 0, 00:12:08.276 "transports": [] 00:12:08.276 }, 00:12:08.276 { 00:12:08.276 "admin_qpairs": 0, 00:12:08.276 "completed_nvme_io": 0, 00:12:08.276 "current_admin_qpairs": 0, 00:12:08.276 "current_io_qpairs": 0, 00:12:08.276 "io_qpairs": 0, 00:12:08.276 "name": "nvmf_tgt_poll_group_1", 00:12:08.276 "pending_bdev_io": 0, 00:12:08.276 "transports": [] 00:12:08.276 }, 00:12:08.276 { 00:12:08.276 "admin_qpairs": 0, 00:12:08.276 "completed_nvme_io": 0, 00:12:08.276 "current_admin_qpairs": 0, 00:12:08.276 "current_io_qpairs": 0, 00:12:08.276 "io_qpairs": 0, 00:12:08.276 "name": "nvmf_tgt_poll_group_2", 00:12:08.276 "pending_bdev_io": 0, 00:12:08.276 "transports": [] 00:12:08.276 }, 00:12:08.276 { 00:12:08.276 "admin_qpairs": 0, 00:12:08.276 "completed_nvme_io": 0, 00:12:08.276 "current_admin_qpairs": 0, 00:12:08.276 "current_io_qpairs": 0, 00:12:08.276 "io_qpairs": 0, 00:12:08.276 "name": "nvmf_tgt_poll_group_3", 00:12:08.276 "pending_bdev_io": 0, 00:12:08.276 "transports": [] 00:12:08.276 } 00:12:08.276 ], 00:12:08.276 "tick_rate": 2200000000 00:12:08.276 }' 00:12:08.276 22:32:08 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:08.276 22:32:08 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:08.276 22:32:08 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:08.276 22:32:08 -- target/rpc.sh@15 -- # wc -l 00:12:08.276 22:32:08 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:08.276 22:32:08 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:08.276 22:32:08 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:08.276 22:32:08 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.276 22:32:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.276 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:08.276 [2024-11-20 22:32:08.921778] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.276 22:32:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.276 22:32:08 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:08.276 22:32:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.276 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:08.276 22:32:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.276 22:32:08 -- target/rpc.sh@33 -- # stats='{ 00:12:08.276 "poll_groups": [ 00:12:08.276 { 00:12:08.276 "admin_qpairs": 0, 00:12:08.276 "completed_nvme_io": 0, 00:12:08.276 "current_admin_qpairs": 0, 00:12:08.276 "current_io_qpairs": 0, 00:12:08.276 "io_qpairs": 0, 00:12:08.276 "name": "nvmf_tgt_poll_group_0", 00:12:08.276 "pending_bdev_io": 0, 00:12:08.276 "transports": [ 00:12:08.276 { 00:12:08.276 "trtype": "TCP" 00:12:08.276 } 00:12:08.276 ] 00:12:08.276 }, 00:12:08.276 { 00:12:08.276 "admin_qpairs": 0, 00:12:08.276 "completed_nvme_io": 0, 00:12:08.276 "current_admin_qpairs": 0, 00:12:08.276 "current_io_qpairs": 0, 00:12:08.276 "io_qpairs": 0, 00:12:08.276 "name": "nvmf_tgt_poll_group_1", 00:12:08.276 "pending_bdev_io": 0, 00:12:08.276 "transports": [ 00:12:08.276 { 00:12:08.276 "trtype": "TCP" 00:12:08.276 } 00:12:08.276 ] 00:12:08.276 }, 00:12:08.276 { 00:12:08.276 "admin_qpairs": 0, 00:12:08.276 "completed_nvme_io": 0, 00:12:08.276 "current_admin_qpairs": 0, 00:12:08.276 "current_io_qpairs": 0, 00:12:08.276 "io_qpairs": 0, 00:12:08.276 "name": "nvmf_tgt_poll_group_2", 00:12:08.276 "pending_bdev_io": 0, 00:12:08.276 "transports": [ 00:12:08.276 { 00:12:08.276 "trtype": "TCP" 00:12:08.276 } 00:12:08.276 ] 00:12:08.276 }, 00:12:08.276 { 00:12:08.276 "admin_qpairs": 0, 00:12:08.276 "completed_nvme_io": 0, 00:12:08.276 "current_admin_qpairs": 0, 00:12:08.276 "current_io_qpairs": 0, 00:12:08.276 "io_qpairs": 0, 00:12:08.276 "name": "nvmf_tgt_poll_group_3", 00:12:08.276 "pending_bdev_io": 0, 00:12:08.276 "transports": [ 00:12:08.276 { 00:12:08.276 "trtype": "TCP" 00:12:08.276 } 00:12:08.276 ] 00:12:08.276 } 00:12:08.276 ], 00:12:08.276 "tick_rate": 2200000000 00:12:08.276 }' 00:12:08.276 22:32:08 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:08.276 22:32:08 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:08.276 22:32:08 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:08.276 22:32:08 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:08.535 22:32:09 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:08.535 22:32:09 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:08.535 22:32:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:08.535 22:32:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:08.535 22:32:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:08.535 22:32:09 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:08.535 22:32:09 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:08.535 22:32:09 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:08.535 22:32:09 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:08.535 22:32:09 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:08.535 22:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.535 22:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:08.535 Malloc1 00:12:08.535 22:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.535 22:32:09 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.535 22:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.535 22:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:08.535 
22:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.535 22:32:09 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.535 22:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.535 22:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:08.535 22:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.535 22:32:09 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:08.535 22:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.535 22:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:08.535 22:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.535 22:32:09 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.535 22:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.535 22:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:08.535 [2024-11-20 22:32:09.128971] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.535 22:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.535 22:32:09 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 -a 10.0.0.2 -s 4420 00:12:08.535 22:32:09 -- common/autotest_common.sh@650 -- # local es=0 00:12:08.535 22:32:09 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 -a 10.0.0.2 -s 4420 00:12:08.535 22:32:09 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:08.535 22:32:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.535 22:32:09 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:08.535 22:32:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.535 22:32:09 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:08.535 22:32:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.535 22:32:09 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:08.535 22:32:09 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:08.535 22:32:09 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 -a 10.0.0.2 -s 4420 00:12:08.535 [2024-11-20 22:32:09.161321] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27' 00:12:08.535 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:08.535 could not add new controller: failed to write to nvme-fabrics device 00:12:08.535 22:32:09 -- common/autotest_common.sh@653 -- # es=1 00:12:08.535 22:32:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:08.535 22:32:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:08.535 22:32:09 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:08.535 22:32:09 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:08.535 22:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.535 22:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:08.535 22:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.535 22:32:09 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.794 22:32:09 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.794 22:32:09 -- common/autotest_common.sh@1187 -- # local i=0 00:12:08.794 22:32:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.794 22:32:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:08.794 22:32:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:10.698 22:32:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:10.699 22:32:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:10.699 22:32:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.699 22:32:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:10.699 22:32:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.699 22:32:11 -- common/autotest_common.sh@1197 -- # return 0 00:12:10.699 22:32:11 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.699 22:32:11 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.699 22:32:11 -- common/autotest_common.sh@1208 -- # local i=0 00:12:10.699 22:32:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:10.699 22:32:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.699 22:32:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.699 22:32:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:10.958 22:32:11 -- common/autotest_common.sh@1220 -- # return 0 00:12:10.958 22:32:11 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:10.958 22:32:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.959 22:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:10.959 22:32:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.959 22:32:11 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.959 22:32:11 -- common/autotest_common.sh@650 -- # local es=0 00:12:10.959 22:32:11 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.959 22:32:11 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:10.959 22:32:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.959 22:32:11 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:10.959 22:32:11 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.959 22:32:11 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:10.959 22:32:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.959 22:32:11 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:10.959 22:32:11 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:10.959 22:32:11 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.959 [2024-11-20 22:32:11.472887] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27' 00:12:10.959 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:10.959 could not add new controller: failed to write to nvme-fabrics device 00:12:10.959 22:32:11 -- common/autotest_common.sh@653 -- # es=1 00:12:10.959 22:32:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:10.959 22:32:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:10.959 22:32:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:10.959 22:32:11 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:10.959 22:32:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.959 22:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:10.959 22:32:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.959 22:32:11 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.959 22:32:11 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.959 22:32:11 -- common/autotest_common.sh@1187 -- # local i=0 00:12:10.959 22:32:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.959 22:32:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:10.959 22:32:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:13.491 22:32:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:13.491 22:32:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:13.491 22:32:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.491 22:32:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:13.491 22:32:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.491 22:32:13 -- common/autotest_common.sh@1197 -- # return 0 00:12:13.491 22:32:13 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.491 22:32:13 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.491 22:32:13 -- common/autotest_common.sh@1208 -- # local i=0 00:12:13.491 22:32:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:13.491 22:32:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.491 22:32:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:13.491 22:32:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.491 22:32:13 -- common/autotest_common.sh@1220 -- # return 0 00:12:13.491 22:32:13 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.491 22:32:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.491 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:13.491 22:32:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.491 22:32:13 -- target/rpc.sh@81 -- # seq 1 5 00:12:13.491 22:32:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:13.491 22:32:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.491 22:32:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.491 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:13.491 22:32:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.491 22:32:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.491 22:32:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.491 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:13.491 [2024-11-20 22:32:13.773941] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.491 22:32:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.491 22:32:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:13.491 22:32:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.491 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:13.491 22:32:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.491 22:32:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.491 22:32:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.491 22:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:13.491 22:32:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.491 22:32:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.491 22:32:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.491 22:32:13 -- common/autotest_common.sh@1187 -- # local i=0 00:12:13.491 22:32:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.491 22:32:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:13.491 22:32:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:15.393 22:32:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:15.393 22:32:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:15.393 22:32:15 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.393 22:32:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:15.393 22:32:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.393 22:32:15 -- common/autotest_common.sh@1197 -- # return 0 00:12:15.393 22:32:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.393 22:32:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.393 22:32:16 -- common/autotest_common.sh@1208 -- # local i=0 00:12:15.393 22:32:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:15.393 22:32:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
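Each pass of the "seq 1 5" loop in rpc.sh repeats the same life cycle against the already-created TCP transport and Malloc1 bdev: create the subsystem, expose it on 10.0.0.2:4420, attach the namespace, open it to any host, connect from the initiator, wait for the SPDKISFASTANDAWESOME serial to surface, then tear everything back down. One iteration is sketched below using the stock scripts/rpc.py client in place of the log's rpc_cmd wrapper (both issue the same JSON-RPC calls over /var/tmp/spdk.sock); the nqn, addresses, serial, and NVME_HOSTNQN/NVME_HOSTID variables are the ones set earlier in the log, and the grep loop is a compact stand-in for the waitforserial helper.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Build up the subsystem: serial number, TCP listener, namespace 5 backed by Malloc1.
$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
$rpc nvmf_subsystem_allow_any_host "$nqn"

# Connect from the initiator side and wait for the namespace to show up as a block device.
nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done

# Tear down: disconnect, drop the namespace, delete the subsystem.
nvme disconnect -n "$nqn"
$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"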
00:12:15.393 22:32:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:15.393 22:32:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.393 22:32:16 -- common/autotest_common.sh@1220 -- # return 0 00:12:15.393 22:32:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.393 22:32:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.393 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:12:15.393 22:32:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.393 22:32:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.393 22:32:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.393 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:12:15.393 22:32:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.393 22:32:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:15.393 22:32:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.393 22:32:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.393 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:12:15.393 22:32:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.393 22:32:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.393 22:32:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.393 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:12:15.393 [2024-11-20 22:32:16.090180] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.393 22:32:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.393 22:32:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:15.393 22:32:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.393 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:12:15.393 22:32:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.393 22:32:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.393 22:32:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.393 22:32:16 -- common/autotest_common.sh@10 -- # set +x 00:12:15.393 22:32:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.393 22:32:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.651 22:32:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.651 22:32:16 -- common/autotest_common.sh@1187 -- # local i=0 00:12:15.651 22:32:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.651 22:32:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:15.651 22:32:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:17.554 22:32:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:17.813 22:32:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:17.813 22:32:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.813 22:32:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:17.813 22:32:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.813 22:32:18 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:17.813 22:32:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.813 22:32:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.813 22:32:18 -- common/autotest_common.sh@1208 -- # local i=0 00:12:17.813 22:32:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:17.813 22:32:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.813 22:32:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:17.813 22:32:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.813 22:32:18 -- common/autotest_common.sh@1220 -- # return 0 00:12:17.813 22:32:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.813 22:32:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.813 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.813 22:32:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.813 22:32:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.813 22:32:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.813 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.813 22:32:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.813 22:32:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.813 22:32:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.813 22:32:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.813 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.813 22:32:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.813 22:32:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.813 22:32:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.813 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.813 [2024-11-20 22:32:18.498931] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.813 22:32:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.813 22:32:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.813 22:32:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.814 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.814 22:32:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.814 22:32:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.814 22:32:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.814 22:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.814 22:32:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.814 22:32:18 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.072 22:32:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.072 22:32:18 -- common/autotest_common.sh@1187 -- # local i=0 00:12:18.072 22:32:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.072 22:32:18 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:18.072 22:32:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:19.976 22:32:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:19.976 22:32:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:19.976 22:32:20 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.235 22:32:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:20.235 22:32:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.235 22:32:20 -- common/autotest_common.sh@1197 -- # return 0 00:12:20.235 22:32:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.235 22:32:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.235 22:32:20 -- common/autotest_common.sh@1208 -- # local i=0 00:12:20.235 22:32:20 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:20.235 22:32:20 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.235 22:32:20 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:20.235 22:32:20 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.235 22:32:20 -- common/autotest_common.sh@1220 -- # return 0 00:12:20.235 22:32:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.235 22:32:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.235 22:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:20.235 22:32:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.235 22:32:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.235 22:32:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.235 22:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:20.235 22:32:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.235 22:32:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.235 22:32:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.235 22:32:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.235 22:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:20.235 22:32:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.235 22:32:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.235 22:32:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.235 22:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:20.235 [2024-11-20 22:32:20.908309] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.235 22:32:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.235 22:32:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.235 22:32:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.235 22:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:20.235 22:32:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.235 22:32:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.235 22:32:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.235 22:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:20.235 22:32:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.235 
22:32:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.494 22:32:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.494 22:32:21 -- common/autotest_common.sh@1187 -- # local i=0 00:12:20.494 22:32:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.494 22:32:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:20.494 22:32:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:22.397 22:32:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:22.397 22:32:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:22.397 22:32:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.397 22:32:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:22.397 22:32:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.397 22:32:23 -- common/autotest_common.sh@1197 -- # return 0 00:12:22.397 22:32:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.656 22:32:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.656 22:32:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:22.656 22:32:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:22.656 22:32:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.656 22:32:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:22.656 22:32:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.656 22:32:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:22.656 22:32:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.656 22:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.656 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.656 22:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.656 22:32:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.656 22:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.656 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.656 22:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.656 22:32:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:22.656 22:32:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.656 22:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.656 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.656 22:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.656 22:32:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.656 22:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.656 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.656 [2024-11-20 22:32:23.216587] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.656 22:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.656 22:32:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:22.656 
22:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.656 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.656 22:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.656 22:32:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.656 22:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.656 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.656 22:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.656 22:32:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.914 22:32:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.914 22:32:23 -- common/autotest_common.sh@1187 -- # local i=0 00:12:22.914 22:32:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.914 22:32:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:22.914 22:32:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:24.818 22:32:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:24.818 22:32:25 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.818 22:32:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:24.818 22:32:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:24.818 22:32:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.818 22:32:25 -- common/autotest_common.sh@1197 -- # return 0 00:12:24.818 22:32:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.818 22:32:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.818 22:32:25 -- common/autotest_common.sh@1208 -- # local i=0 00:12:24.818 22:32:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.818 22:32:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:24.818 22:32:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:24.818 22:32:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.818 22:32:25 -- common/autotest_common.sh@1220 -- # return 0 00:12:24.818 22:32:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.818 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.818 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.818 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.818 22:32:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.818 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.818 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.818 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.818 22:32:25 -- target/rpc.sh@99 -- # seq 1 5 00:12:24.818 22:32:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.818 22:32:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.818 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.818 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.818 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.818 22:32:25 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.818 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.818 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.818 [2024-11-20 22:32:25.521124] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.818 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.818 22:32:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.818 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.818 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.818 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.818 22:32:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.818 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.818 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.818 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.818 22:32:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.818 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.818 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.080 22:32:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 [2024-11-20 22:32:25.569229] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.080 22:32:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 [2024-11-20 22:32:25.621262] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.080 22:32:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 [2024-11-20 22:32:25.669383] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 
22:32:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.080 22:32:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 [2024-11-20 22:32:25.717444] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:12:25.080 22:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.080 22:32:25 -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 22:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.080 22:32:25 -- target/rpc.sh@110 -- # stats='{ 00:12:25.080 "poll_groups": [ 00:12:25.080 { 00:12:25.080 "admin_qpairs": 2, 00:12:25.080 "completed_nvme_io": 115, 00:12:25.080 "current_admin_qpairs": 0, 00:12:25.080 "current_io_qpairs": 0, 00:12:25.080 "io_qpairs": 16, 00:12:25.080 "name": "nvmf_tgt_poll_group_0", 00:12:25.080 "pending_bdev_io": 0, 00:12:25.080 "transports": [ 00:12:25.080 { 00:12:25.080 "trtype": "TCP" 00:12:25.080 } 00:12:25.080 ] 00:12:25.080 }, 00:12:25.080 { 00:12:25.080 "admin_qpairs": 3, 00:12:25.080 "completed_nvme_io": 129, 00:12:25.080 "current_admin_qpairs": 0, 00:12:25.080 "current_io_qpairs": 0, 00:12:25.080 "io_qpairs": 17, 00:12:25.080 "name": "nvmf_tgt_poll_group_1", 00:12:25.080 "pending_bdev_io": 0, 00:12:25.080 "transports": [ 00:12:25.080 { 00:12:25.080 "trtype": "TCP" 00:12:25.080 } 00:12:25.080 ] 00:12:25.080 }, 00:12:25.080 { 00:12:25.080 "admin_qpairs": 1, 00:12:25.080 "completed_nvme_io": 108, 00:12:25.080 "current_admin_qpairs": 0, 00:12:25.080 "current_io_qpairs": 0, 00:12:25.080 "io_qpairs": 19, 00:12:25.080 "name": "nvmf_tgt_poll_group_2", 00:12:25.080 "pending_bdev_io": 0, 00:12:25.080 "transports": [ 00:12:25.080 { 00:12:25.080 "trtype": "TCP" 00:12:25.080 } 00:12:25.080 ] 00:12:25.080 }, 00:12:25.080 { 00:12:25.080 "admin_qpairs": 1, 00:12:25.080 "completed_nvme_io": 68, 00:12:25.080 "current_admin_qpairs": 0, 00:12:25.080 "current_io_qpairs": 0, 00:12:25.080 "io_qpairs": 18, 00:12:25.081 "name": "nvmf_tgt_poll_group_3", 00:12:25.081 "pending_bdev_io": 0, 00:12:25.081 "transports": [ 00:12:25.081 { 00:12:25.081 "trtype": "TCP" 00:12:25.081 } 00:12:25.081 ] 00:12:25.081 } 00:12:25.081 ], 00:12:25.081 "tick_rate": 2200000000 00:12:25.081 }' 00:12:25.081 22:32:25 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:25.081 22:32:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:25.081 22:32:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:25.081 22:32:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:25.349 22:32:25 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:25.349 22:32:25 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:25.349 22:32:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:25.349 22:32:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:25.349 22:32:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:25.349 22:32:25 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:25.349 22:32:25 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:25.349 22:32:25 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:25.349 22:32:25 -- target/rpc.sh@123 -- # nvmftestfini 00:12:25.349 22:32:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:25.349 22:32:25 -- nvmf/common.sh@116 -- # sync 00:12:25.349 22:32:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:25.349 22:32:25 -- nvmf/common.sh@119 -- # set +e 00:12:25.350 22:32:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:25.350 22:32:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:25.350 rmmod nvme_tcp 00:12:25.350 rmmod nvme_fabrics 00:12:25.350 rmmod nvme_keyring 00:12:25.350 22:32:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:25.350 22:32:25 -- nvmf/common.sh@123 -- # set -e 00:12:25.350 22:32:25 -- nvmf/common.sh@124 
-- # return 0 00:12:25.350 22:32:25 -- nvmf/common.sh@477 -- # '[' -n 77791 ']' 00:12:25.350 22:32:25 -- nvmf/common.sh@478 -- # killprocess 77791 00:12:25.350 22:32:25 -- common/autotest_common.sh@936 -- # '[' -z 77791 ']' 00:12:25.350 22:32:25 -- common/autotest_common.sh@940 -- # kill -0 77791 00:12:25.350 22:32:25 -- common/autotest_common.sh@941 -- # uname 00:12:25.350 22:32:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:25.350 22:32:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77791 00:12:25.350 22:32:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:25.350 22:32:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:25.350 killing process with pid 77791 00:12:25.350 22:32:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77791' 00:12:25.350 22:32:26 -- common/autotest_common.sh@955 -- # kill 77791 00:12:25.350 22:32:26 -- common/autotest_common.sh@960 -- # wait 77791 00:12:25.618 22:32:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:25.618 22:32:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:25.618 22:32:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:25.618 22:32:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.618 22:32:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:25.618 22:32:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.618 22:32:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.618 22:32:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.618 22:32:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:25.618 00:12:25.618 real 0m19.137s 00:12:25.618 user 1m12.381s 00:12:25.618 sys 0m2.084s 00:12:25.618 22:32:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:25.618 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:12:25.618 ************************************ 00:12:25.618 END TEST nvmf_rpc 00:12:25.618 ************************************ 00:12:25.884 22:32:26 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:25.884 22:32:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:25.884 22:32:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.884 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:12:25.884 ************************************ 00:12:25.884 START TEST nvmf_invalid 00:12:25.884 ************************************ 00:12:25.884 22:32:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:25.884 * Looking for test storage... 
00:12:25.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.884 22:32:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:25.884 22:32:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:25.884 22:32:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:25.884 22:32:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:25.884 22:32:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:25.884 22:32:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:25.884 22:32:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:25.884 22:32:26 -- scripts/common.sh@335 -- # IFS=.-: 00:12:25.884 22:32:26 -- scripts/common.sh@335 -- # read -ra ver1 00:12:25.884 22:32:26 -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.884 22:32:26 -- scripts/common.sh@336 -- # read -ra ver2 00:12:25.884 22:32:26 -- scripts/common.sh@337 -- # local 'op=<' 00:12:25.884 22:32:26 -- scripts/common.sh@339 -- # ver1_l=2 00:12:25.884 22:32:26 -- scripts/common.sh@340 -- # ver2_l=1 00:12:25.884 22:32:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:25.884 22:32:26 -- scripts/common.sh@343 -- # case "$op" in 00:12:25.884 22:32:26 -- scripts/common.sh@344 -- # : 1 00:12:25.884 22:32:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:25.884 22:32:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.884 22:32:26 -- scripts/common.sh@364 -- # decimal 1 00:12:25.884 22:32:26 -- scripts/common.sh@352 -- # local d=1 00:12:25.884 22:32:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.884 22:32:26 -- scripts/common.sh@354 -- # echo 1 00:12:25.884 22:32:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:25.884 22:32:26 -- scripts/common.sh@365 -- # decimal 2 00:12:25.884 22:32:26 -- scripts/common.sh@352 -- # local d=2 00:12:25.884 22:32:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.884 22:32:26 -- scripts/common.sh@354 -- # echo 2 00:12:25.884 22:32:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:25.884 22:32:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:25.884 22:32:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:25.884 22:32:26 -- scripts/common.sh@367 -- # return 0 00:12:25.884 22:32:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.884 22:32:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:25.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.884 --rc genhtml_branch_coverage=1 00:12:25.884 --rc genhtml_function_coverage=1 00:12:25.884 --rc genhtml_legend=1 00:12:25.884 --rc geninfo_all_blocks=1 00:12:25.884 --rc geninfo_unexecuted_blocks=1 00:12:25.884 00:12:25.884 ' 00:12:25.884 22:32:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:25.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.884 --rc genhtml_branch_coverage=1 00:12:25.884 --rc genhtml_function_coverage=1 00:12:25.884 --rc genhtml_legend=1 00:12:25.884 --rc geninfo_all_blocks=1 00:12:25.884 --rc geninfo_unexecuted_blocks=1 00:12:25.884 00:12:25.884 ' 00:12:25.884 22:32:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:25.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.884 --rc genhtml_branch_coverage=1 00:12:25.884 --rc genhtml_function_coverage=1 00:12:25.884 --rc genhtml_legend=1 00:12:25.884 --rc geninfo_all_blocks=1 00:12:25.884 --rc geninfo_unexecuted_blocks=1 00:12:25.884 00:12:25.884 ' 00:12:25.884 
22:32:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:25.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.884 --rc genhtml_branch_coverage=1 00:12:25.884 --rc genhtml_function_coverage=1 00:12:25.884 --rc genhtml_legend=1 00:12:25.884 --rc geninfo_all_blocks=1 00:12:25.884 --rc geninfo_unexecuted_blocks=1 00:12:25.884 00:12:25.884 ' 00:12:25.884 22:32:26 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.884 22:32:26 -- nvmf/common.sh@7 -- # uname -s 00:12:25.884 22:32:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.884 22:32:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.884 22:32:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.884 22:32:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.884 22:32:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.884 22:32:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.884 22:32:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.884 22:32:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.884 22:32:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.884 22:32:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.884 22:32:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:25.884 22:32:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:25.884 22:32:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.884 22:32:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.884 22:32:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.884 22:32:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.884 22:32:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.884 22:32:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.884 22:32:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.884 22:32:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.884 22:32:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.884 22:32:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.884 22:32:26 -- paths/export.sh@5 -- # export PATH 00:12:25.885 22:32:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.885 22:32:26 -- nvmf/common.sh@46 -- # : 0 00:12:25.885 22:32:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:25.885 22:32:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:25.885 22:32:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:25.885 22:32:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.885 22:32:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.885 22:32:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:25.885 22:32:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:25.885 22:32:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:25.885 22:32:26 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:25.885 22:32:26 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.885 22:32:26 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:25.885 22:32:26 -- target/invalid.sh@14 -- # target=foobar 00:12:25.885 22:32:26 -- target/invalid.sh@16 -- # RANDOM=0 00:12:25.885 22:32:26 -- target/invalid.sh@34 -- # nvmftestinit 00:12:25.885 22:32:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:25.885 22:32:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.885 22:32:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:25.885 22:32:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:25.885 22:32:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:25.885 22:32:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.885 22:32:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.885 22:32:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.885 22:32:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:25.885 22:32:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:25.885 22:32:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:25.885 22:32:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:25.885 22:32:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:25.885 22:32:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:25.885 22:32:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.885 22:32:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.885 22:32:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:25.885 22:32:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:25.885 22:32:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.885 22:32:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.885 22:32:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.885 22:32:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.885 22:32:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.885 22:32:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.885 22:32:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.885 22:32:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.885 22:32:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:25.885 22:32:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:25.885 Cannot find device "nvmf_tgt_br" 00:12:25.885 22:32:26 -- nvmf/common.sh@154 -- # true 00:12:25.885 22:32:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:26.143 Cannot find device "nvmf_tgt_br2" 00:12:26.143 22:32:26 -- nvmf/common.sh@155 -- # true 00:12:26.143 22:32:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:26.143 22:32:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:26.143 Cannot find device "nvmf_tgt_br" 00:12:26.143 22:32:26 -- nvmf/common.sh@157 -- # true 00:12:26.143 22:32:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:26.143 Cannot find device "nvmf_tgt_br2" 00:12:26.143 22:32:26 -- nvmf/common.sh@158 -- # true 00:12:26.144 22:32:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:26.144 22:32:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:26.144 22:32:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.144 22:32:26 -- nvmf/common.sh@161 -- # true 00:12:26.144 22:32:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.144 22:32:26 -- nvmf/common.sh@162 -- # true 00:12:26.144 22:32:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:26.144 22:32:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:26.144 22:32:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:26.144 22:32:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:26.144 22:32:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:26.144 22:32:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.144 22:32:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.144 22:32:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:26.144 22:32:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:26.144 22:32:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:26.144 22:32:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:26.144 22:32:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:26.144 22:32:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:12:26.144 22:32:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.144 22:32:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.144 22:32:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.144 22:32:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:26.144 22:32:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:26.144 22:32:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.144 22:32:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.144 22:32:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.144 22:32:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:26.144 22:32:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.403 22:32:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:26.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:12:26.403 00:12:26.403 --- 10.0.0.2 ping statistics --- 00:12:26.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.403 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:26.403 22:32:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:26.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:26.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:26.403 00:12:26.403 --- 10.0.0.3 ping statistics --- 00:12:26.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.403 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:26.403 22:32:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:26.403 00:12:26.403 --- 10.0.0.1 ping statistics --- 00:12:26.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.403 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:26.403 22:32:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.403 22:32:26 -- nvmf/common.sh@421 -- # return 0 00:12:26.403 22:32:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:26.403 22:32:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.403 22:32:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:26.403 22:32:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:26.403 22:32:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.403 22:32:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:26.403 22:32:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:26.403 22:32:26 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:26.403 22:32:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:26.403 22:32:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.403 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:12:26.403 22:32:26 -- nvmf/common.sh@469 -- # nvmfpid=78313 00:12:26.403 22:32:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.403 22:32:26 -- nvmf/common.sh@470 -- # waitforlisten 78313 00:12:26.403 22:32:26 -- common/autotest_common.sh@829 -- # '[' -z 78313 ']' 00:12:26.403 22:32:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.403 22:32:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.403 22:32:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.403 22:32:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.403 22:32:26 -- common/autotest_common.sh@10 -- # set +x 00:12:26.403 [2024-11-20 22:32:26.966751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:26.403 [2024-11-20 22:32:26.966864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.403 [2024-11-20 22:32:27.113041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.662 [2024-11-20 22:32:27.196041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:26.662 [2024-11-20 22:32:27.196243] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.662 [2024-11-20 22:32:27.196261] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.663 [2024-11-20 22:32:27.196287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:26.663 [2024-11-20 22:32:27.196478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.663 [2024-11-20 22:32:27.197442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.663 [2024-11-20 22:32:27.197527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.663 [2024-11-20 22:32:27.197542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.230 22:32:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.230 22:32:27 -- common/autotest_common.sh@862 -- # return 0 00:12:27.230 22:32:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:27.230 22:32:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:27.230 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.230 22:32:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.230 22:32:27 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:27.230 22:32:27 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode109 00:12:27.489 [2024-11-20 22:32:28.194740] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:27.489 22:32:28 -- target/invalid.sh@40 -- # out='2024/11/20 22:32:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode109 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:27.489 request: 00:12:27.489 { 00:12:27.489 "method": "nvmf_create_subsystem", 00:12:27.489 "params": { 00:12:27.489 "nqn": "nqn.2016-06.io.spdk:cnode109", 00:12:27.489 "tgt_name": "foobar" 00:12:27.489 } 00:12:27.489 } 00:12:27.489 Got JSON-RPC error response 00:12:27.489 GoRPCClient: error on JSON-RPC call' 00:12:27.489 22:32:28 -- target/invalid.sh@41 -- # [[ 2024/11/20 22:32:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode109 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:27.489 request: 00:12:27.489 { 00:12:27.489 "method": "nvmf_create_subsystem", 00:12:27.489 "params": { 00:12:27.489 "nqn": "nqn.2016-06.io.spdk:cnode109", 00:12:27.489 "tgt_name": "foobar" 00:12:27.489 } 00:12:27.489 } 00:12:27.489 Got JSON-RPC error response 00:12:27.489 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:27.489 22:32:28 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:27.489 22:32:28 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26309 00:12:27.755 [2024-11-20 22:32:28.483170] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26309: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:28.015 22:32:28 -- target/invalid.sh@45 -- # out='2024/11/20 22:32:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26309 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:28.015 request: 00:12:28.015 { 00:12:28.015 "method": "nvmf_create_subsystem", 00:12:28.015 "params": { 00:12:28.015 "nqn": "nqn.2016-06.io.spdk:cnode26309", 00:12:28.015 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:12:28.015 } 00:12:28.015 } 00:12:28.015 Got JSON-RPC error response 00:12:28.015 GoRPCClient: error on JSON-RPC call' 00:12:28.015 22:32:28 -- target/invalid.sh@46 -- # [[ 2024/11/20 22:32:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26309 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:28.015 request: 00:12:28.015 { 00:12:28.015 "method": "nvmf_create_subsystem", 00:12:28.015 "params": { 00:12:28.015 "nqn": "nqn.2016-06.io.spdk:cnode26309", 00:12:28.015 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:28.015 } 00:12:28.015 } 00:12:28.015 Got JSON-RPC error response 00:12:28.015 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:28.015 22:32:28 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:28.015 22:32:28 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28689 00:12:28.274 [2024-11-20 22:32:28.787538] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28689: invalid model number 'SPDK_Controller' 00:12:28.274 22:32:28 -- target/invalid.sh@50 -- # out='2024/11/20 22:32:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode28689], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:28.274 request: 00:12:28.274 { 00:12:28.274 "method": "nvmf_create_subsystem", 00:12:28.274 "params": { 00:12:28.274 "nqn": "nqn.2016-06.io.spdk:cnode28689", 00:12:28.274 "model_number": "SPDK_Controller\u001f" 00:12:28.274 } 00:12:28.274 } 00:12:28.274 Got JSON-RPC error response 00:12:28.274 GoRPCClient: error on JSON-RPC call' 00:12:28.275 22:32:28 -- target/invalid.sh@51 -- # [[ 2024/11/20 22:32:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode28689], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:28.275 request: 00:12:28.275 { 00:12:28.275 "method": "nvmf_create_subsystem", 00:12:28.275 "params": { 00:12:28.275 "nqn": "nqn.2016-06.io.spdk:cnode28689", 00:12:28.275 "model_number": "SPDK_Controller\u001f" 00:12:28.275 } 00:12:28.275 } 00:12:28.275 Got JSON-RPC error response 00:12:28.275 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:28.275 22:32:28 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:28.275 22:32:28 -- target/invalid.sh@19 -- # local length=21 ll 00:12:28.275 22:32:28 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:28.275 22:32:28 -- target/invalid.sh@21 -- # local chars 00:12:28.275 22:32:28 -- target/invalid.sh@22 -- # local string 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 77 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=M 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 121 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=y 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 109 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=m 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 34 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+='"' 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 95 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=_ 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 122 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=z 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 110 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=n 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 109 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=m 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 95 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=_ 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 58 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=: 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 117 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=u 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 109 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=m 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 33 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+='!' 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 74 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=J 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 104 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=h 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 43 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=+ 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 121 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=y 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 86 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=V 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 32 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=' ' 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 120 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=x 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # printf %x 115 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:28.275 22:32:28 -- target/invalid.sh@25 -- # string+=s 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.275 22:32:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.275 22:32:28 -- target/invalid.sh@28 -- # [[ M == \- ]] 00:12:28.275 22:32:28 -- target/invalid.sh@31 -- # echo 'Mym"_znm_:um!Jh+yV xs' 00:12:28.275 22:32:28 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Mym"_znm_:um!Jh+yV xs' nqn.2016-06.io.spdk:cnode12029 
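The serial number assembled character-by-character above comes out to 21 characters, one byte past the 20-byte Serial Number field NVMe defines, so the nvmf_create_subsystem call just issued is expected to be rejected with "Invalid SN" in the output that follows. A minimal sketch of that length check, assuming nothing beyond plain bash (the sn variable is illustrative, not part of the test script):

    # The generated serial is one character over the 20-byte NVMe SN limit,
    # which is exactly the rejection exercised below.
    sn='Mym"_znm_:um!Jh+yV xs'
    echo "${#sn}"    # prints 21; SPDK rejects serial numbers longer than 20 bytes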
00:12:28.535 [2024-11-20 22:32:29.204153] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12029: invalid serial number 'Mym"_znm_:um!Jh+yV xs' 00:12:28.535 22:32:29 -- target/invalid.sh@54 -- # out='2024/11/20 22:32:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12029 serial_number:Mym"_znm_:um!Jh+yV xs], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Mym"_znm_:um!Jh+yV xs 00:12:28.535 request: 00:12:28.535 { 00:12:28.535 "method": "nvmf_create_subsystem", 00:12:28.535 "params": { 00:12:28.535 "nqn": "nqn.2016-06.io.spdk:cnode12029", 00:12:28.535 "serial_number": "Mym\"_znm_:um!Jh+yV xs" 00:12:28.535 } 00:12:28.535 } 00:12:28.535 Got JSON-RPC error response 00:12:28.535 GoRPCClient: error on JSON-RPC call' 00:12:28.535 22:32:29 -- target/invalid.sh@55 -- # [[ 2024/11/20 22:32:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12029 serial_number:Mym"_znm_:um!Jh+yV xs], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Mym"_znm_:um!Jh+yV xs 00:12:28.535 request: 00:12:28.535 { 00:12:28.535 "method": "nvmf_create_subsystem", 00:12:28.535 "params": { 00:12:28.535 "nqn": "nqn.2016-06.io.spdk:cnode12029", 00:12:28.535 "serial_number": "Mym\"_znm_:um!Jh+yV xs" 00:12:28.535 } 00:12:28.535 } 00:12:28.535 Got JSON-RPC error response 00:12:28.535 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:28.535 22:32:29 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:28.535 22:32:29 -- target/invalid.sh@19 -- # local length=41 ll 00:12:28.535 22:32:29 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:28.535 22:32:29 -- target/invalid.sh@21 -- # local chars 00:12:28.535 22:32:29 -- target/invalid.sh@22 -- # local string 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # printf %x 114 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # string+=r 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # printf %x 36 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # string+='$' 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # printf %x 49 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # string+=1 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # printf %x 52 00:12:28.535 22:32:29 
-- target/invalid.sh@25 -- # echo -e '\x34' 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # string+=4 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # printf %x 91 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # string+='[' 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # printf %x 79 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:28.535 22:32:29 -- target/invalid.sh@25 -- # string+=O 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.535 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 92 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+='\' 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 120 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=x 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 95 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=_ 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 120 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=x 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 104 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=h 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 48 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=0 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 92 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+='\' 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 79 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=O 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 52 00:12:28.795 22:32:29 -- 
target/invalid.sh@25 -- # echo -e '\x34' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=4 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 32 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=' ' 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 118 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=v 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 62 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+='>' 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 63 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+='?' 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 73 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=I 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 49 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=1 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 92 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+='\' 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 108 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # string+=l 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.795 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # printf %x 70 00:12:28.795 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=F 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 100 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=d 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 109 00:12:28.796 22:32:29 -- 
target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=m 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 79 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=O 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 38 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+='&' 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 42 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+='*' 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 58 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=: 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 118 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=v 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 66 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=B 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 44 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=, 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 85 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=U 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 119 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=w 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 41 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=')' 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 116 00:12:28.796 22:32:29 -- 
target/invalid.sh@25 -- # echo -e '\x74' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=t 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 105 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=i 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 68 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=D 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 58 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=: 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # printf %x 88 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:28.796 22:32:29 -- target/invalid.sh@25 -- # string+=X 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.796 22:32:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.796 22:32:29 -- target/invalid.sh@28 -- # [[ r == \- ]] 00:12:28.796 22:32:29 -- target/invalid.sh@31 -- # echo 'r$14[O\x_xh0\O4 v>?I1\lFdmO&*:vB,Uw)tiD:X' 00:12:28.796 22:32:29 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'r$14[O\x_xh0\O4 v>?I1\lFdmO&*:vB,Uw)tiD:X' nqn.2016-06.io.spdk:cnode13767 00:12:29.055 [2024-11-20 22:32:29.736982] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13767: invalid model number 'r$14[O\x_xh0\O4 v>?I1\lFdmO&*:vB,Uw)tiD:X' 00:12:29.055 22:32:29 -- target/invalid.sh@58 -- # out='2024/11/20 22:32:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:r$14[O\x_xh0\O4 v>?I1\lFdmO&*:vB,Uw)tiD:X nqn:nqn.2016-06.io.spdk:cnode13767], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN r$14[O\x_xh0\O4 v>?I1\lFdmO&*:vB,Uw)tiD:X 00:12:29.055 request: 00:12:29.055 { 00:12:29.055 "method": "nvmf_create_subsystem", 00:12:29.055 "params": { 00:12:29.055 "nqn": "nqn.2016-06.io.spdk:cnode13767", 00:12:29.055 "model_number": "r$14[O\\x_xh0\\O4 v>?I1\\lFdmO&*:vB,Uw)tiD:X" 00:12:29.055 } 00:12:29.055 } 00:12:29.055 Got JSON-RPC error response 00:12:29.055 GoRPCClient: error on JSON-RPC call' 00:12:29.055 22:32:29 -- target/invalid.sh@59 -- # [[ 2024/11/20 22:32:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:r$14[O\x_xh0\O4 v>?I1\lFdmO&*:vB,Uw)tiD:X nqn:nqn.2016-06.io.spdk:cnode13767], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN r$14[O\x_xh0\O4 v>?I1\lFdmO&*:vB,Uw)tiD:X 00:12:29.055 request: 00:12:29.055 { 00:12:29.055 "method": "nvmf_create_subsystem", 00:12:29.055 "params": { 00:12:29.055 "nqn": "nqn.2016-06.io.spdk:cnode13767", 00:12:29.055 "model_number": "r$14[O\\x_xh0\\O4 v>?I1\\lFdmO&*:vB,Uw)tiD:X" 00:12:29.055 } 00:12:29.055 } 00:12:29.055 Got JSON-RPC error response 00:12:29.055 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* 
]] 00:12:29.055 22:32:29 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:29.314 [2024-11-20 22:32:30.033516] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.573 22:32:30 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:29.831 22:32:30 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:29.831 22:32:30 -- target/invalid.sh@67 -- # echo '' 00:12:29.831 22:32:30 -- target/invalid.sh@67 -- # head -n 1 00:12:29.831 22:32:30 -- target/invalid.sh@67 -- # IP= 00:12:29.831 22:32:30 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:30.090 [2024-11-20 22:32:30.576205] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:30.090 22:32:30 -- target/invalid.sh@69 -- # out='2024/11/20 22:32:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:30.090 request: 00:12:30.090 { 00:12:30.090 "method": "nvmf_subsystem_remove_listener", 00:12:30.090 "params": { 00:12:30.090 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:30.090 "listen_address": { 00:12:30.090 "trtype": "tcp", 00:12:30.090 "traddr": "", 00:12:30.090 "trsvcid": "4421" 00:12:30.090 } 00:12:30.090 } 00:12:30.090 } 00:12:30.090 Got JSON-RPC error response 00:12:30.090 GoRPCClient: error on JSON-RPC call' 00:12:30.090 22:32:30 -- target/invalid.sh@70 -- # [[ 2024/11/20 22:32:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:30.090 request: 00:12:30.090 { 00:12:30.090 "method": "nvmf_subsystem_remove_listener", 00:12:30.090 "params": { 00:12:30.090 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:30.090 "listen_address": { 00:12:30.090 "trtype": "tcp", 00:12:30.090 "traddr": "", 00:12:30.090 "trsvcid": "4421" 00:12:30.090 } 00:12:30.090 } 00:12:30.090 } 00:12:30.091 Got JSON-RPC error response 00:12:30.091 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:30.091 22:32:30 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31329 -i 0 00:12:30.091 [2024-11-20 22:32:30.800491] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31329: invalid cntlid range [0-65519] 00:12:30.091 22:32:30 -- target/invalid.sh@73 -- # out='2024/11/20 22:32:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31329], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:30.091 request: 00:12:30.091 { 00:12:30.091 "method": "nvmf_create_subsystem", 00:12:30.091 "params": { 00:12:30.091 "nqn": "nqn.2016-06.io.spdk:cnode31329", 00:12:30.091 "min_cntlid": 0 00:12:30.091 } 00:12:30.091 } 00:12:30.091 Got JSON-RPC error response 00:12:30.091 GoRPCClient: error on JSON-RPC call' 00:12:30.091 22:32:30 -- target/invalid.sh@74 -- # [[ 2024/11/20 22:32:30 error on JSON-RPC call, 
method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31329], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:30.091 request: 00:12:30.091 { 00:12:30.091 "method": "nvmf_create_subsystem", 00:12:30.091 "params": { 00:12:30.091 "nqn": "nqn.2016-06.io.spdk:cnode31329", 00:12:30.091 "min_cntlid": 0 00:12:30.091 } 00:12:30.091 } 00:12:30.091 Got JSON-RPC error response 00:12:30.091 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.359 22:32:30 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2509 -i 65520 00:12:30.359 [2024-11-20 22:32:31.016851] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2509: invalid cntlid range [65520-65519] 00:12:30.359 22:32:31 -- target/invalid.sh@75 -- # out='2024/11/20 22:32:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode2509], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:30.359 request: 00:12:30.359 { 00:12:30.359 "method": "nvmf_create_subsystem", 00:12:30.359 "params": { 00:12:30.359 "nqn": "nqn.2016-06.io.spdk:cnode2509", 00:12:30.359 "min_cntlid": 65520 00:12:30.359 } 00:12:30.359 } 00:12:30.359 Got JSON-RPC error response 00:12:30.359 GoRPCClient: error on JSON-RPC call' 00:12:30.359 22:32:31 -- target/invalid.sh@76 -- # [[ 2024/11/20 22:32:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode2509], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:30.359 request: 00:12:30.359 { 00:12:30.359 "method": "nvmf_create_subsystem", 00:12:30.359 "params": { 00:12:30.359 "nqn": "nqn.2016-06.io.spdk:cnode2509", 00:12:30.359 "min_cntlid": 65520 00:12:30.359 } 00:12:30.360 } 00:12:30.360 Got JSON-RPC error response 00:12:30.360 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.360 22:32:31 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode382 -I 0 00:12:30.625 [2024-11-20 22:32:31.233212] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode382: invalid cntlid range [1-0] 00:12:30.625 22:32:31 -- target/invalid.sh@77 -- # out='2024/11/20 22:32:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode382], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:30.625 request: 00:12:30.625 { 00:12:30.625 "method": "nvmf_create_subsystem", 00:12:30.625 "params": { 00:12:30.625 "nqn": "nqn.2016-06.io.spdk:cnode382", 00:12:30.625 "max_cntlid": 0 00:12:30.625 } 00:12:30.625 } 00:12:30.625 Got JSON-RPC error response 00:12:30.625 GoRPCClient: error on JSON-RPC call' 00:12:30.625 22:32:31 -- target/invalid.sh@78 -- # [[ 2024/11/20 22:32:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode382], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:30.625 request: 00:12:30.625 { 00:12:30.625 "method": "nvmf_create_subsystem", 00:12:30.625 "params": { 00:12:30.625 "nqn": "nqn.2016-06.io.spdk:cnode382", 
00:12:30.625 "max_cntlid": 0 00:12:30.625 } 00:12:30.625 } 00:12:30.625 Got JSON-RPC error response 00:12:30.625 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.625 22:32:31 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2864 -I 65520 00:12:30.883 [2024-11-20 22:32:31.461578] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2864: invalid cntlid range [1-65520] 00:12:30.883 22:32:31 -- target/invalid.sh@79 -- # out='2024/11/20 22:32:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode2864], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:30.883 request: 00:12:30.883 { 00:12:30.884 "method": "nvmf_create_subsystem", 00:12:30.884 "params": { 00:12:30.884 "nqn": "nqn.2016-06.io.spdk:cnode2864", 00:12:30.884 "max_cntlid": 65520 00:12:30.884 } 00:12:30.884 } 00:12:30.884 Got JSON-RPC error response 00:12:30.884 GoRPCClient: error on JSON-RPC call' 00:12:30.884 22:32:31 -- target/invalid.sh@80 -- # [[ 2024/11/20 22:32:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode2864], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:30.884 request: 00:12:30.884 { 00:12:30.884 "method": "nvmf_create_subsystem", 00:12:30.884 "params": { 00:12:30.884 "nqn": "nqn.2016-06.io.spdk:cnode2864", 00:12:30.884 "max_cntlid": 65520 00:12:30.884 } 00:12:30.884 } 00:12:30.884 Got JSON-RPC error response 00:12:30.884 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.884 22:32:31 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29785 -i 6 -I 5 00:12:31.143 [2024-11-20 22:32:31.754057] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29785: invalid cntlid range [6-5] 00:12:31.143 22:32:31 -- target/invalid.sh@83 -- # out='2024/11/20 22:32:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode29785], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:31.143 request: 00:12:31.143 { 00:12:31.143 "method": "nvmf_create_subsystem", 00:12:31.143 "params": { 00:12:31.143 "nqn": "nqn.2016-06.io.spdk:cnode29785", 00:12:31.143 "min_cntlid": 6, 00:12:31.143 "max_cntlid": 5 00:12:31.143 } 00:12:31.143 } 00:12:31.143 Got JSON-RPC error response 00:12:31.143 GoRPCClient: error on JSON-RPC call' 00:12:31.143 22:32:31 -- target/invalid.sh@84 -- # [[ 2024/11/20 22:32:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode29785], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:31.143 request: 00:12:31.143 { 00:12:31.143 "method": "nvmf_create_subsystem", 00:12:31.143 "params": { 00:12:31.143 "nqn": "nqn.2016-06.io.spdk:cnode29785", 00:12:31.143 "min_cntlid": 6, 00:12:31.143 "max_cntlid": 5 00:12:31.143 } 00:12:31.143 } 00:12:31.143 Got JSON-RPC error response 00:12:31.143 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:31.143 22:32:31 -- target/invalid.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:31.402 22:32:31 -- target/invalid.sh@87 -- # out='request: 00:12:31.402 { 00:12:31.402 "name": "foobar", 00:12:31.402 "method": "nvmf_delete_target", 00:12:31.402 "req_id": 1 00:12:31.402 } 00:12:31.402 Got JSON-RPC error response 00:12:31.402 response: 00:12:31.402 { 00:12:31.402 "code": -32602, 00:12:31.402 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:31.402 }' 00:12:31.402 22:32:31 -- target/invalid.sh@88 -- # [[ request: 00:12:31.402 { 00:12:31.402 "name": "foobar", 00:12:31.402 "method": "nvmf_delete_target", 00:12:31.402 "req_id": 1 00:12:31.402 } 00:12:31.402 Got JSON-RPC error response 00:12:31.402 response: 00:12:31.402 { 00:12:31.402 "code": -32602, 00:12:31.402 "message": "The specified target doesn't exist, cannot delete it." 00:12:31.402 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:31.402 22:32:31 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:31.402 22:32:31 -- target/invalid.sh@91 -- # nvmftestfini 00:12:31.402 22:32:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:31.402 22:32:31 -- nvmf/common.sh@116 -- # sync 00:12:31.402 22:32:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:31.402 22:32:31 -- nvmf/common.sh@119 -- # set +e 00:12:31.402 22:32:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:31.402 22:32:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:31.402 rmmod nvme_tcp 00:12:31.402 rmmod nvme_fabrics 00:12:31.402 rmmod nvme_keyring 00:12:31.402 22:32:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:31.402 22:32:31 -- nvmf/common.sh@123 -- # set -e 00:12:31.402 22:32:31 -- nvmf/common.sh@124 -- # return 0 00:12:31.402 22:32:31 -- nvmf/common.sh@477 -- # '[' -n 78313 ']' 00:12:31.402 22:32:31 -- nvmf/common.sh@478 -- # killprocess 78313 00:12:31.402 22:32:31 -- common/autotest_common.sh@936 -- # '[' -z 78313 ']' 00:12:31.402 22:32:31 -- common/autotest_common.sh@940 -- # kill -0 78313 00:12:31.402 22:32:31 -- common/autotest_common.sh@941 -- # uname 00:12:31.402 22:32:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.402 22:32:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78313 00:12:31.402 22:32:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:31.402 22:32:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:31.402 killing process with pid 78313 00:12:31.402 22:32:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78313' 00:12:31.403 22:32:32 -- common/autotest_common.sh@955 -- # kill 78313 00:12:31.403 22:32:32 -- common/autotest_common.sh@960 -- # wait 78313 00:12:31.662 22:32:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:31.662 22:32:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:31.662 22:32:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:31.662 22:32:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.662 22:32:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:31.662 22:32:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.662 22:32:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.662 22:32:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.662 22:32:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:31.662 00:12:31.662 real 0m5.937s 00:12:31.662 user 
0m23.510s 00:12:31.662 sys 0m1.372s 00:12:31.662 22:32:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.662 22:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 ************************************ 00:12:31.662 END TEST nvmf_invalid 00:12:31.662 ************************************ 00:12:31.662 22:32:32 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.662 22:32:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.662 22:32:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.662 22:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 ************************************ 00:12:31.662 START TEST nvmf_abort 00:12:31.662 ************************************ 00:12:31.662 22:32:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.922 * Looking for test storage... 00:12:31.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.922 22:32:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.922 22:32:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.922 22:32:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.922 22:32:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.922 22:32:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.922 22:32:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.922 22:32:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.922 22:32:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.922 22:32:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.922 22:32:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.922 22:32:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.922 22:32:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.922 22:32:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.922 22:32:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.922 22:32:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.922 22:32:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.922 22:32:32 -- scripts/common.sh@344 -- # : 1 00:12:31.922 22:32:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.922 22:32:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.922 22:32:32 -- scripts/common.sh@364 -- # decimal 1 00:12:31.922 22:32:32 -- scripts/common.sh@352 -- # local d=1 00:12:31.922 22:32:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.922 22:32:32 -- scripts/common.sh@354 -- # echo 1 00:12:31.922 22:32:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.922 22:32:32 -- scripts/common.sh@365 -- # decimal 2 00:12:31.922 22:32:32 -- scripts/common.sh@352 -- # local d=2 00:12:31.922 22:32:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.922 22:32:32 -- scripts/common.sh@354 -- # echo 2 00:12:31.922 22:32:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.922 22:32:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.922 22:32:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.922 22:32:32 -- scripts/common.sh@367 -- # return 0 00:12:31.922 22:32:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.922 22:32:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.922 --rc genhtml_branch_coverage=1 00:12:31.922 --rc genhtml_function_coverage=1 00:12:31.922 --rc genhtml_legend=1 00:12:31.922 --rc geninfo_all_blocks=1 00:12:31.922 --rc geninfo_unexecuted_blocks=1 00:12:31.922 00:12:31.922 ' 00:12:31.923 22:32:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.923 --rc genhtml_branch_coverage=1 00:12:31.923 --rc genhtml_function_coverage=1 00:12:31.923 --rc genhtml_legend=1 00:12:31.923 --rc geninfo_all_blocks=1 00:12:31.923 --rc geninfo_unexecuted_blocks=1 00:12:31.923 00:12:31.923 ' 00:12:31.923 22:32:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.923 --rc genhtml_branch_coverage=1 00:12:31.923 --rc genhtml_function_coverage=1 00:12:31.923 --rc genhtml_legend=1 00:12:31.923 --rc geninfo_all_blocks=1 00:12:31.923 --rc geninfo_unexecuted_blocks=1 00:12:31.923 00:12:31.923 ' 00:12:31.923 22:32:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.923 --rc genhtml_branch_coverage=1 00:12:31.923 --rc genhtml_function_coverage=1 00:12:31.923 --rc genhtml_legend=1 00:12:31.923 --rc geninfo_all_blocks=1 00:12:31.923 --rc geninfo_unexecuted_blocks=1 00:12:31.923 00:12:31.923 ' 00:12:31.923 22:32:32 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.923 22:32:32 -- nvmf/common.sh@7 -- # uname -s 00:12:31.923 22:32:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.923 22:32:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.923 22:32:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.923 22:32:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.923 22:32:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.923 22:32:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.923 22:32:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.923 22:32:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.923 22:32:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.923 22:32:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.923 22:32:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:31.923 
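The host NQN captured above from nvme gen-hostnqn uses the UUID-based form nqn.2014-08.org.nvmexpress:uuid:<uuid>. For reference, a rough stand-in when nvme-cli is not available, assuming only uuidgen (a sketch, not what common.sh actually runs):

    # Sketch: hand-roll a uuid-based host NQN equivalent to `nvme gen-hostnqn`.
    printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"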
22:32:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:31.923 22:32:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.923 22:32:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.923 22:32:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.923 22:32:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.923 22:32:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.923 22:32:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.923 22:32:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.923 22:32:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.923 22:32:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.923 22:32:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.923 22:32:32 -- paths/export.sh@5 -- # export PATH 00:12:31.923 22:32:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.923 22:32:32 -- nvmf/common.sh@46 -- # : 0 00:12:31.923 22:32:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:31.923 22:32:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:31.923 22:32:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:31.923 22:32:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.923 22:32:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.923 22:32:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
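NVME_HOSTNQN, NVME_HOSTID, the NVME_HOST array and NVME_CONNECT set above form the initiator-side identity that nvmf tests hand to nvme-cli when a kernel initiator is involved; this abort run drives I/O through the SPDK example app instead. A hypothetical connect built from those variables would look roughly like the sketch below; the 10.0.0.2:4420 address and nqn.2016-06.io.spdk:cnode0 subsystem are taken from the listener created later in this log, and none of this is executed in this job:

    # Hypothetical initiator-side connect using the variables sourced above (not run here).
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # uuid portion of the NQN, as in common.sh
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"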
00:12:31.923 22:32:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:31.923 22:32:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:31.923 22:32:32 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.923 22:32:32 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:31.923 22:32:32 -- target/abort.sh@14 -- # nvmftestinit 00:12:31.923 22:32:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:31.923 22:32:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.923 22:32:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:31.923 22:32:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:31.923 22:32:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:31.923 22:32:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.923 22:32:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.923 22:32:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.923 22:32:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:31.923 22:32:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:31.923 22:32:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:31.923 22:32:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:31.923 22:32:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:31.923 22:32:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:31.923 22:32:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.923 22:32:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.923 22:32:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.923 22:32:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:31.923 22:32:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.923 22:32:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.923 22:32:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.923 22:32:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.923 22:32:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.923 22:32:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.923 22:32:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.923 22:32:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.923 22:32:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:31.923 22:32:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:31.923 Cannot find device "nvmf_tgt_br" 00:12:31.923 22:32:32 -- nvmf/common.sh@154 -- # true 00:12:31.923 22:32:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.923 Cannot find device "nvmf_tgt_br2" 00:12:31.923 22:32:32 -- nvmf/common.sh@155 -- # true 00:12:31.923 22:32:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:31.923 22:32:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:31.923 Cannot find device "nvmf_tgt_br" 00:12:31.923 22:32:32 -- nvmf/common.sh@157 -- # true 00:12:31.923 22:32:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:31.923 Cannot find device "nvmf_tgt_br2" 00:12:31.923 22:32:32 -- nvmf/common.sh@158 -- # true 00:12:31.923 22:32:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:32.183 22:32:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:32.183 22:32:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.183 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:32.183 22:32:32 -- nvmf/common.sh@161 -- # true 00:12:32.183 22:32:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.183 22:32:32 -- nvmf/common.sh@162 -- # true 00:12:32.183 22:32:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.183 22:32:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.183 22:32:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.183 22:32:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.183 22:32:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.183 22:32:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.183 22:32:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.183 22:32:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:32.183 22:32:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:32.183 22:32:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:32.183 22:32:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:32.183 22:32:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:32.183 22:32:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:32.183 22:32:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.183 22:32:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.183 22:32:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.183 22:32:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:32.183 22:32:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:32.183 22:32:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.183 22:32:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.183 22:32:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.183 22:32:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.183 22:32:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.183 22:32:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:32.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:12:32.183 00:12:32.183 --- 10.0.0.2 ping statistics --- 00:12:32.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.183 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:32.183 22:32:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:32.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:12:32.183 00:12:32.183 --- 10.0.0.3 ping statistics --- 00:12:32.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.183 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:32.183 22:32:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:32.183 00:12:32.183 --- 10.0.0.1 ping statistics --- 00:12:32.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.183 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:32.183 22:32:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.183 22:32:32 -- nvmf/common.sh@421 -- # return 0 00:12:32.183 22:32:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:32.183 22:32:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.183 22:32:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:32.183 22:32:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:32.183 22:32:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.183 22:32:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:32.183 22:32:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:32.443 22:32:32 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:32.443 22:32:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:32.443 22:32:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.443 22:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:32.443 22:32:32 -- nvmf/common.sh@469 -- # nvmfpid=78836 00:12:32.443 22:32:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:32.443 22:32:32 -- nvmf/common.sh@470 -- # waitforlisten 78836 00:12:32.443 22:32:32 -- common/autotest_common.sh@829 -- # '[' -z 78836 ']' 00:12:32.443 22:32:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.443 22:32:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.443 22:32:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.443 22:32:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.443 22:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:32.443 [2024-11-20 22:32:32.964535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:32.443 [2024-11-20 22:32:32.964600] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.443 [2024-11-20 22:32:33.093839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.443 [2024-11-20 22:32:33.153899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:32.443 [2024-11-20 22:32:33.154046] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.443 [2024-11-20 22:32:33.154059] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.443 [2024-11-20 22:32:33.154068] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
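nvmf_tgt is launched above with -m 0xE, a CPU core bitmask: 0xE is binary 1110, i.e. cores 1-3, which matches the "Total cores available: 3" notice and the three reactor threads that report in on cores 1, 2 and 3 just below. A quick way to decode such a mask, as a plain-bash sketch (the mask value is copied from the command above):

    # Decode an SPDK core mask into the cores it enables (sketch).
    mask=0xE
    for core in $(seq 0 31); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done

The -e 0xFFFF argument is the tracepoint group mask echoed back in the "Tracepoint Group Mask 0xFFFF specified" notice above.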
00:12:32.443 [2024-11-20 22:32:33.155064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.443 [2024-11-20 22:32:33.155213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.443 [2024-11-20 22:32:33.155236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.381 22:32:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.381 22:32:34 -- common/autotest_common.sh@862 -- # return 0 00:12:33.381 22:32:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:33.381 22:32:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:33.381 22:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 22:32:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.381 22:32:34 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:33.381 22:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.381 22:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 [2024-11-20 22:32:34.059141] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.381 22:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.381 22:32:34 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:33.381 22:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.381 22:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 Malloc0 00:12:33.381 22:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.381 22:32:34 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:33.381 22:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.381 22:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 Delay0 00:12:33.381 22:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.381 22:32:34 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:33.381 22:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.381 22:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.640 22:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.640 22:32:34 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:33.640 22:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.640 22:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.640 22:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.640 22:32:34 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:33.640 22:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.640 22:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.640 [2024-11-20 22:32:34.130887] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.640 22:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.640 22:32:34 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.640 22:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.640 22:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.640 22:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.640 22:32:34 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:33.640 [2024-11-20 22:32:34.306906] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:36.176 Initializing NVMe Controllers 00:12:36.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:36.176 controller IO queue size 128 less than required 00:12:36.176 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:36.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:36.176 Initialization complete. Launching workers. 00:12:36.176 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 40084 00:12:36.176 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 40145, failed to submit 62 00:12:36.176 success 40084, unsuccess 61, failed 0 00:12:36.176 22:32:36 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:36.176 22:32:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.176 22:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:36.176 22:32:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.176 22:32:36 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:36.176 22:32:36 -- target/abort.sh@38 -- # nvmftestfini 00:12:36.176 22:32:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:36.176 22:32:36 -- nvmf/common.sh@116 -- # sync 00:12:36.176 22:32:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:36.176 22:32:36 -- nvmf/common.sh@119 -- # set +e 00:12:36.176 22:32:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:36.176 22:32:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:36.176 rmmod nvme_tcp 00:12:36.176 rmmod nvme_fabrics 00:12:36.176 rmmod nvme_keyring 00:12:36.176 22:32:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:36.176 22:32:36 -- nvmf/common.sh@123 -- # set -e 00:12:36.176 22:32:36 -- nvmf/common.sh@124 -- # return 0 00:12:36.176 22:32:36 -- nvmf/common.sh@477 -- # '[' -n 78836 ']' 00:12:36.176 22:32:36 -- nvmf/common.sh@478 -- # killprocess 78836 00:12:36.176 22:32:36 -- common/autotest_common.sh@936 -- # '[' -z 78836 ']' 00:12:36.176 22:32:36 -- common/autotest_common.sh@940 -- # kill -0 78836 00:12:36.176 22:32:36 -- common/autotest_common.sh@941 -- # uname 00:12:36.176 22:32:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.176 22:32:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78836 00:12:36.176 22:32:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:36.176 22:32:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:36.176 killing process with pid 78836 00:12:36.176 22:32:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78836' 00:12:36.176 22:32:36 -- common/autotest_common.sh@955 -- # kill 78836 00:12:36.176 22:32:36 -- common/autotest_common.sh@960 -- # wait 78836 00:12:36.176 22:32:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:36.176 22:32:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:36.176 22:32:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:36.176 22:32:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.176 22:32:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:36.176 22:32:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.176 
22:32:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.176 22:32:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.176 22:32:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:36.176 00:12:36.176 real 0m4.374s 00:12:36.176 user 0m12.677s 00:12:36.176 sys 0m0.985s 00:12:36.176 22:32:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:36.176 ************************************ 00:12:36.176 END TEST nvmf_abort 00:12:36.176 22:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:36.176 ************************************ 00:12:36.176 22:32:36 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:36.176 22:32:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:36.176 22:32:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.176 22:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:36.176 ************************************ 00:12:36.176 START TEST nvmf_ns_hotplug_stress 00:12:36.176 ************************************ 00:12:36.176 22:32:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:36.176 * Looking for test storage... 00:12:36.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.176 22:32:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:36.176 22:32:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:36.176 22:32:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:36.436 22:32:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:36.436 22:32:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:36.436 22:32:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:36.436 22:32:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:36.436 22:32:36 -- scripts/common.sh@335 -- # IFS=.-: 00:12:36.436 22:32:36 -- scripts/common.sh@335 -- # read -ra ver1 00:12:36.436 22:32:36 -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.436 22:32:36 -- scripts/common.sh@336 -- # read -ra ver2 00:12:36.436 22:32:36 -- scripts/common.sh@337 -- # local 'op=<' 00:12:36.436 22:32:36 -- scripts/common.sh@339 -- # ver1_l=2 00:12:36.436 22:32:36 -- scripts/common.sh@340 -- # ver2_l=1 00:12:36.436 22:32:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:36.436 22:32:36 -- scripts/common.sh@343 -- # case "$op" in 00:12:36.436 22:32:36 -- scripts/common.sh@344 -- # : 1 00:12:36.436 22:32:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:36.436 22:32:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.436 22:32:36 -- scripts/common.sh@364 -- # decimal 1 00:12:36.436 22:32:36 -- scripts/common.sh@352 -- # local d=1 00:12:36.436 22:32:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.436 22:32:36 -- scripts/common.sh@354 -- # echo 1 00:12:36.436 22:32:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:36.436 22:32:36 -- scripts/common.sh@365 -- # decimal 2 00:12:36.436 22:32:36 -- scripts/common.sh@352 -- # local d=2 00:12:36.436 22:32:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.436 22:32:36 -- scripts/common.sh@354 -- # echo 2 00:12:36.436 22:32:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:36.436 22:32:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:36.436 22:32:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:36.436 22:32:36 -- scripts/common.sh@367 -- # return 0 00:12:36.436 22:32:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.436 22:32:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:36.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.436 --rc genhtml_branch_coverage=1 00:12:36.436 --rc genhtml_function_coverage=1 00:12:36.436 --rc genhtml_legend=1 00:12:36.436 --rc geninfo_all_blocks=1 00:12:36.436 --rc geninfo_unexecuted_blocks=1 00:12:36.436 00:12:36.436 ' 00:12:36.436 22:32:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:36.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.436 --rc genhtml_branch_coverage=1 00:12:36.436 --rc genhtml_function_coverage=1 00:12:36.436 --rc genhtml_legend=1 00:12:36.436 --rc geninfo_all_blocks=1 00:12:36.436 --rc geninfo_unexecuted_blocks=1 00:12:36.436 00:12:36.436 ' 00:12:36.436 22:32:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:36.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.436 --rc genhtml_branch_coverage=1 00:12:36.436 --rc genhtml_function_coverage=1 00:12:36.436 --rc genhtml_legend=1 00:12:36.436 --rc geninfo_all_blocks=1 00:12:36.436 --rc geninfo_unexecuted_blocks=1 00:12:36.436 00:12:36.436 ' 00:12:36.436 22:32:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:36.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.436 --rc genhtml_branch_coverage=1 00:12:36.436 --rc genhtml_function_coverage=1 00:12:36.436 --rc genhtml_legend=1 00:12:36.436 --rc geninfo_all_blocks=1 00:12:36.436 --rc geninfo_unexecuted_blocks=1 00:12:36.436 00:12:36.436 ' 00:12:36.436 22:32:36 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.436 22:32:36 -- nvmf/common.sh@7 -- # uname -s 00:12:36.436 22:32:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.436 22:32:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.436 22:32:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.436 22:32:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.436 22:32:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.436 22:32:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.436 22:32:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.436 22:32:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.436 22:32:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.436 22:32:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.436 22:32:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
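nvmf/common.sh derives a host NQN and host ID once per run via nvme gen-hostnqn and stashes them in NVME_HOST for later nvme connect calls. This particular run drives I/O from user space (the abort example above and spdk_nvme_perf below), so the kernel initiator is never attached, but with these values a kernel-side connect would look roughly like the following sketch; the subsystem NQN and address are the ones the target advertises later in this log, and the hostid extraction is a simplified stand-in for what common.sh does:

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # bare UUID portion, as seen in the trace above

# hypothetical kernel-initiator connect against the test subsystem (not exercised in this run)
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"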
00:12:36.436 22:32:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:12:36.436 22:32:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.436 22:32:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.436 22:32:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.436 22:32:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.436 22:32:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.436 22:32:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.436 22:32:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.436 22:32:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.436 22:32:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.436 22:32:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.436 22:32:36 -- paths/export.sh@5 -- # export PATH 00:12:36.436 22:32:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.436 22:32:36 -- nvmf/common.sh@46 -- # : 0 00:12:36.436 22:32:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:36.436 22:32:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:36.436 22:32:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:36.436 22:32:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.436 22:32:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.436 22:32:36 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:36.436 22:32:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:36.436 22:32:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:36.436 22:32:36 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.436 22:32:36 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:36.436 22:32:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:36.436 22:32:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.436 22:32:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:36.436 22:32:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:36.436 22:32:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:36.436 22:32:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.436 22:32:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.436 22:32:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.436 22:32:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:36.436 22:32:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:36.436 22:32:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:36.436 22:32:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:36.436 22:32:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:36.436 22:32:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:36.436 22:32:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.436 22:32:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.436 22:32:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:36.436 22:32:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:36.436 22:32:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.436 22:32:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.436 22:32:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.436 22:32:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.436 22:32:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.436 22:32:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.436 22:32:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.436 22:32:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.436 22:32:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:36.436 22:32:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:36.436 Cannot find device "nvmf_tgt_br" 00:12:36.436 22:32:37 -- nvmf/common.sh@154 -- # true 00:12:36.436 22:32:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.436 Cannot find device "nvmf_tgt_br2" 00:12:36.436 22:32:37 -- nvmf/common.sh@155 -- # true 00:12:36.436 22:32:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:36.436 22:32:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:36.436 Cannot find device "nvmf_tgt_br" 00:12:36.436 22:32:37 -- nvmf/common.sh@157 -- # true 00:12:36.436 22:32:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:36.436 Cannot find device "nvmf_tgt_br2" 00:12:36.436 22:32:37 -- nvmf/common.sh@158 -- # true 00:12:36.437 22:32:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:36.437 22:32:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:36.437 22:32:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.437 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:36.437 22:32:37 -- nvmf/common.sh@161 -- # true 00:12:36.437 22:32:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.437 22:32:37 -- nvmf/common.sh@162 -- # true 00:12:36.437 22:32:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.437 22:32:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.437 22:32:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.437 22:32:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.696 22:32:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:36.696 22:32:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:36.696 22:32:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:36.696 22:32:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:36.696 22:32:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:36.696 22:32:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:36.696 22:32:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:36.696 22:32:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:36.696 22:32:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:36.696 22:32:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:36.696 22:32:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:36.696 22:32:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:36.696 22:32:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:36.696 22:32:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:36.696 22:32:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:36.696 22:32:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:36.696 22:32:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:36.696 22:32:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:36.696 22:32:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:36.696 22:32:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:36.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:36.696 00:12:36.696 --- 10.0.0.2 ping statistics --- 00:12:36.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.696 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:36.696 22:32:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:36.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:36.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:36.696 00:12:36.696 --- 10.0.0.3 ping statistics --- 00:12:36.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.696 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:36.696 22:32:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:36.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:36.696 00:12:36.696 --- 10.0.0.1 ping statistics --- 00:12:36.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.696 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:36.696 22:32:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.696 22:32:37 -- nvmf/common.sh@421 -- # return 0 00:12:36.696 22:32:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:36.696 22:32:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.696 22:32:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:36.696 22:32:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:36.696 22:32:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.696 22:32:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:36.696 22:32:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:36.696 22:32:37 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:36.696 22:32:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:36.696 22:32:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:36.696 22:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.696 22:32:37 -- nvmf/common.sh@469 -- # nvmfpid=79109 00:12:36.696 22:32:37 -- nvmf/common.sh@470 -- # waitforlisten 79109 00:12:36.696 22:32:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:36.696 22:32:37 -- common/autotest_common.sh@829 -- # '[' -z 79109 ']' 00:12:36.696 22:32:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.696 22:32:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.696 22:32:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.696 22:32:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.696 22:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.696 [2024-11-20 22:32:37.411548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:36.696 [2024-11-20 22:32:37.411643] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.955 [2024-11-20 22:32:37.550414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.955 [2024-11-20 22:32:37.609439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:36.955 [2024-11-20 22:32:37.609830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.955 [2024-11-20 22:32:37.609885] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.955 [2024-11-20 22:32:37.610011] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
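nvmfappstart then launches the SPDK target inside the freshly built namespace and waitforlisten blocks until the RPC socket is usable before any rpc.py configuration is sent. A simplified sketch of that step, using the flags traced above (instance 0, all trace groups, reactors on cores 1-3 via -m 0xE); the real waitforlisten retries RPC calls with a timeout, the polling loop here is only a stand-in:

# start the target in the namespace and remember its pid
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# wait until the process is listening on the default RPC socket /var/tmp/spdk.sock
while kill -0 "$nvmfpid" 2>/dev/null && [ ! -S /var/tmp/spdk.sock ]; do
    sleep 0.5
done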
00:12:36.955 [2024-11-20 22:32:37.610503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.955 [2024-11-20 22:32:37.610584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.955 [2024-11-20 22:32:37.610588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.891 22:32:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.891 22:32:38 -- common/autotest_common.sh@862 -- # return 0 00:12:37.891 22:32:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.891 22:32:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:37.891 22:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:37.891 22:32:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.891 22:32:38 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:37.891 22:32:38 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:38.148 [2024-11-20 22:32:38.753475] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.148 22:32:38 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:38.406 22:32:38 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.673 [2024-11-20 22:32:39.176138] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.673 22:32:39 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.673 22:32:39 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:38.934 Malloc0 00:12:38.934 22:32:39 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:39.192 Delay0 00:12:39.192 22:32:39 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.450 22:32:40 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:39.708 NULL1 00:12:39.708 22:32:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:39.967 22:32:40 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:39.967 22:32:40 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79240 00:12:39.967 22:32:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:39.967 22:32:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.343 Read completed with error (sct=0, sc=11) 00:12:41.343 22:32:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.343 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:41.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.343 22:32:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:41.343 22:32:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:41.602 true 00:12:41.602 22:32:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:41.602 22:32:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.536 22:32:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.536 22:32:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:42.536 22:32:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:42.795 true 00:12:42.795 22:32:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:42.795 22:32:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.053 22:32:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.312 22:32:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:43.312 22:32:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:43.312 true 00:12:43.312 22:32:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:43.312 22:32:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.688 22:32:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.688 22:32:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:44.688 22:32:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:44.688 true 00:12:44.688 22:32:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:44.688 22:32:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.947 22:32:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.205 22:32:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:45.205 22:32:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:45.464 true 00:12:45.464 22:32:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:45.464 22:32:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.400 22:32:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.658 22:32:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:46.658 22:32:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:46.917 true 00:12:46.917 22:32:47 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:46.917 22:32:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.176 22:32:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.176 22:32:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:47.176 22:32:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:47.434 true 00:12:47.435 22:32:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:47.435 22:32:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.371 22:32:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.635 22:32:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:48.635 22:32:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:48.896 true 00:12:48.896 22:32:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:48.896 22:32:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.896 22:32:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.155 22:32:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:49.155 22:32:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:49.413 true 00:12:49.413 22:32:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:49.413 22:32:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.350 22:32:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.608 22:32:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:50.608 22:32:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:50.867 true 00:12:50.867 22:32:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:50.867 22:32:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.125 22:32:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.384 22:32:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:51.384 22:32:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:51.642 true 00:12:51.642 22:32:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:51.642 22:32:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.578 22:32:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.578 22:32:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:52.578 22:32:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:52.836 true 00:12:52.836 22:32:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:52.836 22:32:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.095 22:32:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.362 22:32:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:53.362 22:32:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:53.645 true 00:12:53.645 22:32:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:53.645 22:32:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.599 22:32:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.599 22:32:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:54.599 22:32:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:54.858 true 00:12:54.858 22:32:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:54.858 22:32:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.117 22:32:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.375 22:32:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:55.375 22:32:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:55.375 true 00:12:55.375 22:32:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:55.375 22:32:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.752 22:32:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.752 22:32:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:56.752 22:32:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:57.011 true 00:12:57.011 22:32:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:57.011 22:32:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.270 22:32:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.529 22:32:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:57.529 22:32:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:57.788 true 00:12:57.788 22:32:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:57.788 22:32:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.046 22:32:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.046 22:32:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:58.046 22:32:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:58.304 true 00:12:58.304 22:32:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:58.304 22:32:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.685 22:33:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.685 22:33:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:59.685 22:33:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:59.944 true 00:12:59.944 22:33:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:12:59.944 22:33:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.879 22:33:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.138 22:33:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:01.138 22:33:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:01.397 true 00:13:01.397 22:33:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:01.397 22:33:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.397 22:33:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.656 22:33:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:01.656 22:33:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:01.914 true 00:13:01.915 22:33:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:01.915 22:33:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.173 22:33:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.432 22:33:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:02.432 22:33:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:02.690 true 00:13:02.690 22:33:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:02.690 22:33:03 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:04.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.066 22:33:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.066 22:33:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:04.066 22:33:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:04.324 true 00:13:04.324 22:33:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:04.324 22:33:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.260 22:33:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.260 22:33:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:05.260 22:33:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:05.518 true 00:13:05.518 22:33:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:05.518 22:33:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.777 22:33:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.777 22:33:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:05.777 22:33:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:06.035 true 00:13:06.035 22:33:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:06.035 22:33:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.970 22:33:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.228 22:33:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:07.228 22:33:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:07.487 true 00:13:07.487 22:33:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:07.487 22:33:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.746 22:33:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.005 22:33:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:08.005 22:33:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:08.263 true 00:13:08.263 22:33:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:08.263 22:33:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:09.198 22:33:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:09.198 22:33:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:09.198 22:33:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:09.457 true 00:13:09.457 22:33:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:09.457 22:33:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.715 22:33:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.974 22:33:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:09.974 22:33:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:10.233 true 00:13:10.233 22:33:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:10.233 22:33:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.170 Initializing NVMe Controllers 00:13:11.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:11.170 Controller IO queue size 128, less than required. 00:13:11.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:11.170 Controller IO queue size 128, less than required. 00:13:11.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:11.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:11.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:11.170 Initialization complete. Launching workers. 
00:13:11.170 ======================================================== 00:13:11.170 Latency(us) 00:13:11.170 Device Information : IOPS MiB/s Average min max 00:13:11.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 603.07 0.29 123770.29 3842.05 1034484.06 00:13:11.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14190.07 6.93 9020.31 2617.34 508427.21 00:13:11.170 ======================================================== 00:13:11.170 Total : 14793.13 7.22 13698.28 2617.34 1034484.06 00:13:11.170 00:13:11.170 22:33:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.170 22:33:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:11.170 22:33:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:11.429 true 00:13:11.429 22:33:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79240 00:13:11.429 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79240) - No such process 00:13:11.429 22:33:12 -- target/ns_hotplug_stress.sh@53 -- # wait 79240 00:13:11.429 22:33:12 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.687 22:33:12 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.946 22:33:12 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:11.946 22:33:12 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:11.946 22:33:12 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:11.946 22:33:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.946 22:33:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:12.205 null0 00:13:12.205 22:33:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.205 22:33:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.205 22:33:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:12.463 null1 00:13:12.463 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.463 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.463 22:33:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:12.722 null2 00:13:12.722 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.722 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.722 22:33:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:12.981 null3 00:13:12.981 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.981 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.981 22:33:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:12.981 null4 00:13:12.981 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.981 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.981 22:33:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:13.240 null5 00:13:13.240 22:33:13 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.240 22:33:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.240 22:33:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:13.499 null6 00:13:13.499 22:33:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.499 22:33:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.499 22:33:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:13.759 null7 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
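Behind the xtrace noise, each add_remove worker being launched here is a small loop over two target RPCs: hot-add the given bdev as a fixed namespace ID on cnode1, then hot-remove it again, ten times over (the @16-@18 pattern repeated through this part of the trace). A minimal sketch of that helper, assuming the same subsystem and rpc.py path:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

add_remove() {
    # $1 = namespace ID to cycle, $2 = null bdev backing it
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}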
00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@66 -- # wait 80299 80300 80303 80304 80306 80309 80310 80314 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.759 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.017 22:33:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.017 22:33:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.017 22:33:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.017 22:33:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.017 22:33:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.017 22:33:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.017 22:33:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 22:33:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
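The jumbled add_ns/remove_ns output that follows is eight add_remove workers running concurrently, one per null bdev, each bound to its own namespace ID. A minimal reconstruction of the helper and the fan-out, pieced together from the ns_hotplug_stress.sh line numbers visible in the trace (@14-@18 and @62-@66) and reusing the rpc_py path from the sketch above; the real script may differ in detail:

    add_remove() {
        # attach and detach one bdev as namespace $nsid of cnode1, ten times
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &   # one background worker per namespace ID
        pids+=($!)
    done
    wait "${pids[@]}"   # shows up above as: wait 80299 80300 80303 80304 80306 80309 80310 80314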
nqn.2016-06.io.spdk:cnode1 null0 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.535 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.794 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.053 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.312 22:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.312 22:33:15 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.312 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.312 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.312 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.571 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.831 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.090 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.349 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.349 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.349 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.349 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.349 22:33:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:16.349 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.349 22:33:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.349 22:33:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.349 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.608 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.867 22:33:17 -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.867 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.126 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.385 22:33:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:17.385 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:17.385 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.385 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.386 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.386 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.386 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.386 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.386 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.386 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.386 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.644 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.902 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:18.161 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.419 22:33:18 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.419 22:33:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.419 22:33:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.676 22:33:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.934 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.193 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.193 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.193 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.193 22:33:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.193 22:33:19 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:19.193 22:33:19 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:19.193 22:33:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:19.193 22:33:19 -- nvmf/common.sh@116 -- # sync 00:13:19.193 22:33:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:19.193 22:33:19 -- nvmf/common.sh@119 -- # set +e 00:13:19.193 22:33:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:19.193 22:33:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:19.193 rmmod nvme_tcp 00:13:19.193 rmmod nvme_fabrics 00:13:19.193 rmmod nvme_keyring 00:13:19.193 22:33:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:19.193 22:33:19 -- nvmf/common.sh@123 -- # set -e 00:13:19.193 22:33:19 -- nvmf/common.sh@124 -- # return 0 00:13:19.193 22:33:19 -- nvmf/common.sh@477 -- # '[' -n 79109 ']' 00:13:19.193 22:33:19 -- nvmf/common.sh@478 -- # killprocess 79109 00:13:19.193 22:33:19 -- common/autotest_common.sh@936 -- # '[' -z 79109 ']' 00:13:19.193 22:33:19 -- common/autotest_common.sh@940 -- # kill -0 79109 00:13:19.193 22:33:19 -- common/autotest_common.sh@941 -- # uname 00:13:19.193 22:33:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.193 22:33:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79109 00:13:19.193 killing process with pid 79109 00:13:19.193 22:33:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:19.193 22:33:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:19.193 22:33:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79109' 00:13:19.193 22:33:19 -- common/autotest_common.sh@955 -- # kill 79109 00:13:19.193 22:33:19 -- common/autotest_common.sh@960 -- # wait 79109 00:13:19.452 22:33:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:19.452 22:33:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:19.452 22:33:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:19.452 22:33:20 -- nvmf/common.sh@273 -- # 
[[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.452 22:33:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:19.452 22:33:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.452 22:33:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.452 22:33:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.452 22:33:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:19.452 00:13:19.452 real 0m43.360s 00:13:19.452 user 3m25.035s 00:13:19.452 sys 0m12.217s 00:13:19.452 22:33:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:19.453 22:33:20 -- common/autotest_common.sh@10 -- # set +x 00:13:19.453 ************************************ 00:13:19.453 END TEST nvmf_ns_hotplug_stress 00:13:19.453 ************************************ 00:13:19.712 22:33:20 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.712 22:33:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:19.712 22:33:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.712 22:33:20 -- common/autotest_common.sh@10 -- # set +x 00:13:19.712 ************************************ 00:13:19.712 START TEST nvmf_connect_stress 00:13:19.712 ************************************ 00:13:19.712 22:33:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.712 * Looking for test storage... 00:13:19.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:19.712 22:33:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:19.712 22:33:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:19.712 22:33:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:19.712 22:33:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:19.712 22:33:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:19.712 22:33:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:19.712 22:33:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:19.712 22:33:20 -- scripts/common.sh@335 -- # IFS=.-: 00:13:19.712 22:33:20 -- scripts/common.sh@335 -- # read -ra ver1 00:13:19.712 22:33:20 -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.712 22:33:20 -- scripts/common.sh@336 -- # read -ra ver2 00:13:19.712 22:33:20 -- scripts/common.sh@337 -- # local 'op=<' 00:13:19.712 22:33:20 -- scripts/common.sh@339 -- # ver1_l=2 00:13:19.712 22:33:20 -- scripts/common.sh@340 -- # ver2_l=1 00:13:19.712 22:33:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:19.712 22:33:20 -- scripts/common.sh@343 -- # case "$op" in 00:13:19.712 22:33:20 -- scripts/common.sh@344 -- # : 1 00:13:19.712 22:33:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:19.712 22:33:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.712 22:33:20 -- scripts/common.sh@364 -- # decimal 1 00:13:19.712 22:33:20 -- scripts/common.sh@352 -- # local d=1 00:13:19.712 22:33:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.712 22:33:20 -- scripts/common.sh@354 -- # echo 1 00:13:19.712 22:33:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:19.712 22:33:20 -- scripts/common.sh@365 -- # decimal 2 00:13:19.712 22:33:20 -- scripts/common.sh@352 -- # local d=2 00:13:19.712 22:33:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.712 22:33:20 -- scripts/common.sh@354 -- # echo 2 00:13:19.712 22:33:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:19.712 22:33:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:19.712 22:33:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:19.712 22:33:20 -- scripts/common.sh@367 -- # return 0 00:13:19.712 22:33:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.712 22:33:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:19.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.712 --rc genhtml_branch_coverage=1 00:13:19.712 --rc genhtml_function_coverage=1 00:13:19.712 --rc genhtml_legend=1 00:13:19.712 --rc geninfo_all_blocks=1 00:13:19.712 --rc geninfo_unexecuted_blocks=1 00:13:19.712 00:13:19.712 ' 00:13:19.712 22:33:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:19.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.712 --rc genhtml_branch_coverage=1 00:13:19.712 --rc genhtml_function_coverage=1 00:13:19.712 --rc genhtml_legend=1 00:13:19.712 --rc geninfo_all_blocks=1 00:13:19.712 --rc geninfo_unexecuted_blocks=1 00:13:19.712 00:13:19.712 ' 00:13:19.712 22:33:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:19.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.712 --rc genhtml_branch_coverage=1 00:13:19.712 --rc genhtml_function_coverage=1 00:13:19.712 --rc genhtml_legend=1 00:13:19.712 --rc geninfo_all_blocks=1 00:13:19.712 --rc geninfo_unexecuted_blocks=1 00:13:19.712 00:13:19.712 ' 00:13:19.712 22:33:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:19.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.712 --rc genhtml_branch_coverage=1 00:13:19.712 --rc genhtml_function_coverage=1 00:13:19.712 --rc genhtml_legend=1 00:13:19.712 --rc geninfo_all_blocks=1 00:13:19.712 --rc geninfo_unexecuted_blocks=1 00:13:19.712 00:13:19.712 ' 00:13:19.712 22:33:20 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:19.712 22:33:20 -- nvmf/common.sh@7 -- # uname -s 00:13:19.712 22:33:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.712 22:33:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.712 22:33:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.712 22:33:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.712 22:33:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.712 22:33:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.712 22:33:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.713 22:33:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.713 22:33:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.713 22:33:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.713 22:33:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
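The NVME_HOSTNQN/NVME_HOSTID pair that nvmf/common.sh derives here is the initiator identity the nvme connect helper passes along later. A rough sketch of that step as the trace suggests it; the gen-hostnqn call and the resulting values are taken from the log, while the exact way the uuid suffix is extracted is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # assumed: keep only the uuid portion of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")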
00:13:19.713 22:33:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:13:19.713 22:33:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.713 22:33:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.713 22:33:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:19.713 22:33:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.713 22:33:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.713 22:33:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.713 22:33:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.713 22:33:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.713 22:33:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.713 22:33:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.713 22:33:20 -- paths/export.sh@5 -- # export PATH 00:13:19.713 22:33:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.713 22:33:20 -- nvmf/common.sh@46 -- # : 0 00:13:19.713 22:33:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:19.713 22:33:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:19.713 22:33:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:19.713 22:33:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.713 22:33:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.713 22:33:20 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:19.713 22:33:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:19.713 22:33:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:19.713 22:33:20 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:19.713 22:33:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:19.713 22:33:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.713 22:33:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:19.713 22:33:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:19.713 22:33:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:19.713 22:33:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.713 22:33:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.713 22:33:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.713 22:33:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:19.713 22:33:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:19.713 22:33:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:19.713 22:33:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:19.713 22:33:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:19.713 22:33:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:19.713 22:33:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.713 22:33:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.713 22:33:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:19.713 22:33:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:19.713 22:33:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:19.713 22:33:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:19.713 22:33:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:19.713 22:33:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.713 22:33:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:19.713 22:33:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:19.713 22:33:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:19.713 22:33:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:19.713 22:33:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:19.713 22:33:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:19.713 Cannot find device "nvmf_tgt_br" 00:13:19.713 22:33:20 -- nvmf/common.sh@154 -- # true 00:13:19.713 22:33:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:19.713 Cannot find device "nvmf_tgt_br2" 00:13:19.713 22:33:20 -- nvmf/common.sh@155 -- # true 00:13:19.713 22:33:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:19.713 22:33:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:19.713 Cannot find device "nvmf_tgt_br" 00:13:19.972 22:33:20 -- nvmf/common.sh@157 -- # true 00:13:19.972 22:33:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:19.972 Cannot find device "nvmf_tgt_br2" 00:13:19.972 22:33:20 -- nvmf/common.sh@158 -- # true 00:13:19.972 22:33:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:19.972 22:33:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:19.972 22:33:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:19.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.972 22:33:20 -- nvmf/common.sh@161 -- # true 00:13:19.972 22:33:20 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:19.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.972 22:33:20 -- nvmf/common.sh@162 -- # true 00:13:19.972 22:33:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:19.972 22:33:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:19.972 22:33:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:19.972 22:33:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:19.972 22:33:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:19.972 22:33:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:19.972 22:33:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:19.972 22:33:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:19.972 22:33:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:19.972 22:33:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:19.972 22:33:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:19.972 22:33:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:19.972 22:33:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:19.972 22:33:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:19.972 22:33:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:19.972 22:33:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:19.972 22:33:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:19.972 22:33:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:19.972 22:33:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:19.972 22:33:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:20.231 22:33:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:20.231 22:33:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:20.231 22:33:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:20.231 22:33:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:20.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:13:20.231 00:13:20.231 --- 10.0.0.2 ping statistics --- 00:13:20.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.231 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:20.231 22:33:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:20.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:20.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:13:20.231 00:13:20.231 --- 10.0.0.3 ping statistics --- 00:13:20.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.231 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:20.231 22:33:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:20.231 00:13:20.231 --- 10.0.0.1 ping statistics --- 00:13:20.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.231 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:20.231 22:33:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.231 22:33:20 -- nvmf/common.sh@421 -- # return 0 00:13:20.231 22:33:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.231 22:33:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.231 22:33:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.231 22:33:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.231 22:33:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.231 22:33:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.231 22:33:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.231 22:33:20 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:20.231 22:33:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:20.231 22:33:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.231 22:33:20 -- common/autotest_common.sh@10 -- # set +x 00:13:20.231 22:33:20 -- nvmf/common.sh@469 -- # nvmfpid=81635 00:13:20.231 22:33:20 -- nvmf/common.sh@470 -- # waitforlisten 81635 00:13:20.231 22:33:20 -- common/autotest_common.sh@829 -- # '[' -z 81635 ']' 00:13:20.231 22:33:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.231 22:33:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.232 22:33:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.232 22:33:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.232 22:33:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.232 22:33:20 -- common/autotest_common.sh@10 -- # set +x 00:13:20.232 [2024-11-20 22:33:20.822486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:20.232 [2024-11-20 22:33:20.822575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.232 [2024-11-20 22:33:20.958681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.515 [2024-11-20 22:33:21.039955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:20.515 [2024-11-20 22:33:21.040454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.515 [2024-11-20 22:33:21.040507] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.515 [2024-11-20 22:33:21.040640] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
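Condensed, the nvmf_veth_init sequence traced above builds the following topology (interface names, addresses and firewall rules are copied from the log; the earlier "Cannot find device" / "Cannot open network namespace" messages are just the pre-cleanup pass failing harmlessly before anything exists yet):

# Sketch of the test network nvmf_veth_init sets up, as traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> target ports

The bridge keeps the initiator-side veth ends and both target ports on one L2 segment, so the host can reach 10.0.0.2/10.0.0.3 inside the namespace over plain TCP on port 4420, which the ping statistics above confirm.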
00:13:20.515 [2024-11-20 22:33:21.041315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.515 [2024-11-20 22:33:21.041461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.515 [2024-11-20 22:33:21.041470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.163 22:33:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.163 22:33:21 -- common/autotest_common.sh@862 -- # return 0 00:13:21.163 22:33:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:21.163 22:33:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.163 22:33:21 -- common/autotest_common.sh@10 -- # set +x 00:13:21.435 22:33:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.435 22:33:21 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.435 22:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.435 22:33:21 -- common/autotest_common.sh@10 -- # set +x 00:13:21.435 [2024-11-20 22:33:21.899372] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.435 22:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.435 22:33:21 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:21.435 22:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.435 22:33:21 -- common/autotest_common.sh@10 -- # set +x 00:13:21.435 22:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.435 22:33:21 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.435 22:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.435 22:33:21 -- common/autotest_common.sh@10 -- # set +x 00:13:21.435 [2024-11-20 22:33:21.921458] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.435 22:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.435 22:33:21 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:21.435 22:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.435 22:33:21 -- common/autotest_common.sh@10 -- # set +x 00:13:21.435 NULL1 00:13:21.435 22:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.435 22:33:21 -- target/connect_stress.sh@21 -- # PERF_PID=81687 00:13:21.435 22:33:21 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:21.435 22:33:21 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:21.435 22:33:21 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- 
target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:21 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:22 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:22 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.435 22:33:22 -- target/connect_stress.sh@28 -- # cat 00:13:21.435 22:33:22 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:21.435 22:33:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.435 22:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.435 22:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:21.694 22:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.694 22:33:22 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:21.694 22:33:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.694 22:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.694 22:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:21.952 22:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.952 22:33:22 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:21.952 22:33:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.952 22:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.952 22:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:22.518 22:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.518 22:33:22 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:22.518 22:33:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.518 22:33:22 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:22.518 22:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:22.776 22:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.776 22:33:23 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:22.776 22:33:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.776 22:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.776 22:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:23.034 22:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.034 22:33:23 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:23.034 22:33:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.034 22:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.034 22:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:23.292 22:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.292 22:33:23 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:23.292 22:33:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.292 22:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.292 22:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:23.553 22:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.553 22:33:24 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:23.553 22:33:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.553 22:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.553 22:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:24.120 22:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.120 22:33:24 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:24.120 22:33:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.120 22:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.120 22:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:24.378 22:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.379 22:33:24 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:24.379 22:33:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.379 22:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.379 22:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:24.637 22:33:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.637 22:33:25 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:24.637 22:33:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.637 22:33:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.637 22:33:25 -- common/autotest_common.sh@10 -- # set +x 00:13:24.896 22:33:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.896 22:33:25 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:24.896 22:33:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.896 22:33:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.896 22:33:25 -- common/autotest_common.sh@10 -- # set +x 00:13:25.464 22:33:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.464 22:33:25 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:25.464 22:33:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.464 22:33:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.464 22:33:25 -- common/autotest_common.sh@10 -- # set +x 00:13:25.722 22:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.722 22:33:26 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:25.722 22:33:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.722 22:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.722 
22:33:26 -- common/autotest_common.sh@10 -- # set +x 00:13:25.979 22:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.979 22:33:26 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:25.979 22:33:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.979 22:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.979 22:33:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.237 22:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.237 22:33:26 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:26.237 22:33:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.237 22:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.237 22:33:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.494 22:33:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.494 22:33:27 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:26.495 22:33:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.495 22:33:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.495 22:33:27 -- common/autotest_common.sh@10 -- # set +x 00:13:27.059 22:33:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.059 22:33:27 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:27.059 22:33:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.059 22:33:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.059 22:33:27 -- common/autotest_common.sh@10 -- # set +x 00:13:27.317 22:33:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.317 22:33:27 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:27.317 22:33:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.317 22:33:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.317 22:33:27 -- common/autotest_common.sh@10 -- # set +x 00:13:27.574 22:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.574 22:33:28 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:27.574 22:33:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.574 22:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.574 22:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:27.833 22:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.833 22:33:28 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:27.833 22:33:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.833 22:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.833 22:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:28.091 22:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.091 22:33:28 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:28.091 22:33:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.091 22:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.091 22:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:28.655 22:33:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.655 22:33:29 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:28.655 22:33:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.655 22:33:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.655 22:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:28.912 22:33:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.912 22:33:29 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:28.913 22:33:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.913 22:33:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.913 22:33:29 -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.171 22:33:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.171 22:33:29 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:29.171 22:33:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.171 22:33:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.171 22:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:29.429 22:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.429 22:33:30 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:29.429 22:33:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.430 22:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.430 22:33:30 -- common/autotest_common.sh@10 -- # set +x 00:13:29.688 22:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.688 22:33:30 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:29.688 22:33:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.688 22:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.688 22:33:30 -- common/autotest_common.sh@10 -- # set +x 00:13:30.255 22:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.255 22:33:30 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:30.255 22:33:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.255 22:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.255 22:33:30 -- common/autotest_common.sh@10 -- # set +x 00:13:30.513 22:33:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.513 22:33:31 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:30.513 22:33:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.513 22:33:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.513 22:33:31 -- common/autotest_common.sh@10 -- # set +x 00:13:30.771 22:33:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.771 22:33:31 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:30.771 22:33:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.771 22:33:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.771 22:33:31 -- common/autotest_common.sh@10 -- # set +x 00:13:31.030 22:33:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.030 22:33:31 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:31.030 22:33:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.030 22:33:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.030 22:33:31 -- common/autotest_common.sh@10 -- # set +x 00:13:31.289 22:33:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.289 22:33:32 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:31.289 22:33:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.289 22:33:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.289 22:33:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.547 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:31.806 22:33:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.806 22:33:32 -- target/connect_stress.sh@34 -- # kill -0 81687 00:13:31.806 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81687) - No such process 00:13:31.806 22:33:32 -- target/connect_stress.sh@38 -- # wait 81687 00:13:31.806 22:33:32 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:31.806 22:33:32 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:31.806 22:33:32 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:31.806 22:33:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:31.806 22:33:32 -- nvmf/common.sh@116 -- # sync 00:13:31.806 22:33:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:31.806 22:33:32 -- nvmf/common.sh@119 -- # set +e 00:13:31.806 22:33:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:31.806 22:33:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:31.806 rmmod nvme_tcp 00:13:31.806 rmmod nvme_fabrics 00:13:31.806 rmmod nvme_keyring 00:13:31.806 22:33:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:31.806 22:33:32 -- nvmf/common.sh@123 -- # set -e 00:13:31.806 22:33:32 -- nvmf/common.sh@124 -- # return 0 00:13:31.806 22:33:32 -- nvmf/common.sh@477 -- # '[' -n 81635 ']' 00:13:31.806 22:33:32 -- nvmf/common.sh@478 -- # killprocess 81635 00:13:31.806 22:33:32 -- common/autotest_common.sh@936 -- # '[' -z 81635 ']' 00:13:31.806 22:33:32 -- common/autotest_common.sh@940 -- # kill -0 81635 00:13:31.806 22:33:32 -- common/autotest_common.sh@941 -- # uname 00:13:31.806 22:33:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:31.806 22:33:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81635 00:13:31.806 22:33:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:31.806 killing process with pid 81635 00:13:31.806 22:33:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:31.806 22:33:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81635' 00:13:31.806 22:33:32 -- common/autotest_common.sh@955 -- # kill 81635 00:13:31.806 22:33:32 -- common/autotest_common.sh@960 -- # wait 81635 00:13:32.065 22:33:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:32.065 22:33:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:32.065 22:33:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:32.065 22:33:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.065 22:33:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:32.065 22:33:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.065 22:33:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.065 22:33:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.065 22:33:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:32.065 00:13:32.065 real 0m12.569s 00:13:32.065 user 0m41.854s 00:13:32.065 sys 0m3.122s 00:13:32.065 22:33:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:32.065 22:33:32 -- common/autotest_common.sh@10 -- # set +x 00:13:32.065 ************************************ 00:13:32.065 END TEST nvmf_connect_stress 00:13:32.065 ************************************ 00:13:32.323 22:33:32 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.323 22:33:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:32.323 22:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:32.323 22:33:32 -- common/autotest_common.sh@10 -- # set +x 00:13:32.323 ************************************ 00:13:32.323 START TEST nvmf_fused_ordering 00:13:32.323 ************************************ 00:13:32.323 22:33:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.323 * Looking for test storage... 
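The connect_stress run that just finished follows a simple pattern. A rough sketch is below; the commands are the ones issued by target/connect_stress.sh in the trace, while the loop structure and the stdin redirection are reconstructed from the repeated "kill -0 81687" / rpc_cmd lines (rpc_cmd stands for the harness helper, assumed here to forward to scripts/rpc.py on /var/tmp/spdk.sock):

# Target setup, as issued above.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512            # ~1 GB null namespace, 512-byte blocks

# Stress the subsystem with connect/disconnect cycles for 10 seconds...
/home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!

# ...while the harness keeps the control plane busy: as long as the stressor is
# alive it replays a batch of RPCs (assumed to be the rpc.txt file filled by the
# twenty "cat" snippets above), then waits and cleans up once kill -0 fails.
while kill -0 "$PERF_PID" 2>/dev/null; do
  rpc_cmd <rpc.txt
done
wait "$PERF_PID"
rm -f rpc.txt

The "kill: (81687) - No such process" line above is the expected end of that loop: the stressor exits after its 10-second run, the poll fails, and the script falls through to nvmftestfini.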
00:13:32.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:32.323 22:33:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:32.323 22:33:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:32.323 22:33:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:32.323 22:33:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:32.323 22:33:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:32.323 22:33:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:32.323 22:33:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:32.323 22:33:32 -- scripts/common.sh@335 -- # IFS=.-: 00:13:32.323 22:33:32 -- scripts/common.sh@335 -- # read -ra ver1 00:13:32.323 22:33:32 -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.323 22:33:32 -- scripts/common.sh@336 -- # read -ra ver2 00:13:32.323 22:33:32 -- scripts/common.sh@337 -- # local 'op=<' 00:13:32.323 22:33:32 -- scripts/common.sh@339 -- # ver1_l=2 00:13:32.323 22:33:32 -- scripts/common.sh@340 -- # ver2_l=1 00:13:32.324 22:33:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:32.324 22:33:32 -- scripts/common.sh@343 -- # case "$op" in 00:13:32.324 22:33:32 -- scripts/common.sh@344 -- # : 1 00:13:32.324 22:33:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:32.324 22:33:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:32.324 22:33:32 -- scripts/common.sh@364 -- # decimal 1 00:13:32.324 22:33:32 -- scripts/common.sh@352 -- # local d=1 00:13:32.324 22:33:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.324 22:33:32 -- scripts/common.sh@354 -- # echo 1 00:13:32.324 22:33:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:32.324 22:33:32 -- scripts/common.sh@365 -- # decimal 2 00:13:32.324 22:33:32 -- scripts/common.sh@352 -- # local d=2 00:13:32.324 22:33:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.324 22:33:32 -- scripts/common.sh@354 -- # echo 2 00:13:32.324 22:33:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:32.324 22:33:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:32.324 22:33:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:32.324 22:33:32 -- scripts/common.sh@367 -- # return 0 00:13:32.324 22:33:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.324 22:33:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:32.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.324 --rc genhtml_branch_coverage=1 00:13:32.324 --rc genhtml_function_coverage=1 00:13:32.324 --rc genhtml_legend=1 00:13:32.324 --rc geninfo_all_blocks=1 00:13:32.324 --rc geninfo_unexecuted_blocks=1 00:13:32.324 00:13:32.324 ' 00:13:32.324 22:33:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:32.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.324 --rc genhtml_branch_coverage=1 00:13:32.324 --rc genhtml_function_coverage=1 00:13:32.324 --rc genhtml_legend=1 00:13:32.324 --rc geninfo_all_blocks=1 00:13:32.324 --rc geninfo_unexecuted_blocks=1 00:13:32.324 00:13:32.324 ' 00:13:32.324 22:33:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:32.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.324 --rc genhtml_branch_coverage=1 00:13:32.324 --rc genhtml_function_coverage=1 00:13:32.324 --rc genhtml_legend=1 00:13:32.324 --rc geninfo_all_blocks=1 00:13:32.324 --rc geninfo_unexecuted_blocks=1 00:13:32.324 00:13:32.324 ' 00:13:32.324 
22:33:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:32.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.324 --rc genhtml_branch_coverage=1 00:13:32.324 --rc genhtml_function_coverage=1 00:13:32.324 --rc genhtml_legend=1 00:13:32.324 --rc geninfo_all_blocks=1 00:13:32.324 --rc geninfo_unexecuted_blocks=1 00:13:32.324 00:13:32.324 ' 00:13:32.324 22:33:32 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.324 22:33:32 -- nvmf/common.sh@7 -- # uname -s 00:13:32.324 22:33:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.324 22:33:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.324 22:33:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.324 22:33:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.324 22:33:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.324 22:33:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.324 22:33:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.324 22:33:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.324 22:33:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.324 22:33:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.324 22:33:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:13:32.324 22:33:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:13:32.324 22:33:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.324 22:33:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.324 22:33:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.324 22:33:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.324 22:33:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.324 22:33:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.324 22:33:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.324 22:33:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.324 22:33:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.324 22:33:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.324 22:33:32 -- paths/export.sh@5 -- # export PATH 00:13:32.324 22:33:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.324 22:33:32 -- nvmf/common.sh@46 -- # : 0 00:13:32.324 22:33:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:32.324 22:33:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:32.324 22:33:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:32.324 22:33:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.324 22:33:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.324 22:33:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:32.324 22:33:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:32.324 22:33:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:32.324 22:33:33 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:32.324 22:33:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:32.324 22:33:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.324 22:33:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:32.324 22:33:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:32.324 22:33:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:32.324 22:33:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.324 22:33:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.324 22:33:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.324 22:33:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:32.324 22:33:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:32.324 22:33:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:32.324 22:33:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:32.324 22:33:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:32.324 22:33:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:32.324 22:33:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.324 22:33:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.324 22:33:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:32.324 22:33:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:32.324 22:33:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.324 22:33:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.324 22:33:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.324 22:33:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:32.324 22:33:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.324 22:33:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.324 22:33:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.324 22:33:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.324 22:33:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:32.324 22:33:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:32.324 Cannot find device "nvmf_tgt_br" 00:13:32.324 22:33:33 -- nvmf/common.sh@154 -- # true 00:13:32.324 22:33:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.324 Cannot find device "nvmf_tgt_br2" 00:13:32.324 22:33:33 -- nvmf/common.sh@155 -- # true 00:13:32.324 22:33:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:32.582 22:33:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:32.583 Cannot find device "nvmf_tgt_br" 00:13:32.583 22:33:33 -- nvmf/common.sh@157 -- # true 00:13:32.583 22:33:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:32.583 Cannot find device "nvmf_tgt_br2" 00:13:32.583 22:33:33 -- nvmf/common.sh@158 -- # true 00:13:32.583 22:33:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:32.583 22:33:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:32.583 22:33:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.583 22:33:33 -- nvmf/common.sh@161 -- # true 00:13:32.583 22:33:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.583 22:33:33 -- nvmf/common.sh@162 -- # true 00:13:32.583 22:33:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.583 22:33:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.583 22:33:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.583 22:33:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.583 22:33:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:32.583 22:33:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.583 22:33:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:32.583 22:33:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:32.583 22:33:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:32.583 22:33:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:32.583 22:33:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:32.583 22:33:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:32.583 22:33:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:32.583 22:33:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.583 22:33:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.583 22:33:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.583 22:33:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:32.583 22:33:33 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:32.583 22:33:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.583 22:33:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.583 22:33:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.583 22:33:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.583 22:33:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.583 22:33:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:32.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:13:32.583 00:13:32.583 --- 10.0.0.2 ping statistics --- 00:13:32.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.583 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:32.583 22:33:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:32.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:32.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:32.842 00:13:32.842 --- 10.0.0.3 ping statistics --- 00:13:32.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.842 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:32.842 22:33:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:32.842 00:13:32.842 --- 10.0.0.1 ping statistics --- 00:13:32.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.842 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:32.842 22:33:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.842 22:33:33 -- nvmf/common.sh@421 -- # return 0 00:13:32.842 22:33:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:32.842 22:33:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.842 22:33:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:32.842 22:33:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:32.842 22:33:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.842 22:33:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:32.842 22:33:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:32.842 22:33:33 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:32.842 22:33:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:32.842 22:33:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.842 22:33:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.842 22:33:33 -- nvmf/common.sh@469 -- # nvmfpid=82021 00:13:32.842 22:33:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:32.842 22:33:33 -- nvmf/common.sh@470 -- # waitforlisten 82021 00:13:32.842 22:33:33 -- common/autotest_common.sh@829 -- # '[' -z 82021 ']' 00:13:32.842 22:33:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.842 22:33:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.842 22:33:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
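The nvmfappstart -m 0x2 call above launches a fresh target inside the same namespace for the fused_ordering test. A minimal sketch of that step follows; the launch command and flags are copied from the log, while the readiness probe is an assumption (the trace only shows the "Waiting for process to start up..." message), with rpc.py / spdk_get_version used purely as an illustrative check:

# Sketch of nvmfappstart -m 0x2 as traced above.
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0x2: run on core 1 only
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  kill -0 "$nvmfpid" || exit 1       # give up if the target died during startup
  sleep 0.1
done

The 0x2 core mask matches the single "Reactor started on core 1" line that follows, in contrast to the 0xE mask (cores 1-3) used for the connect_stress target earlier.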
00:13:32.842 22:33:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.842 22:33:33 -- common/autotest_common.sh@10 -- # set +x 00:13:32.842 [2024-11-20 22:33:33.404046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:32.842 [2024-11-20 22:33:33.404128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.842 [2024-11-20 22:33:33.545249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.101 [2024-11-20 22:33:33.615385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:33.101 [2024-11-20 22:33:33.615557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.101 [2024-11-20 22:33:33.615574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.101 [2024-11-20 22:33:33.615587] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.101 [2024-11-20 22:33:33.615626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.036 22:33:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.037 22:33:34 -- common/autotest_common.sh@862 -- # return 0 00:13:34.037 22:33:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:34.037 22:33:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:34.037 22:33:34 -- common/autotest_common.sh@10 -- # set +x 00:13:34.037 22:33:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.037 22:33:34 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:34.037 22:33:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.037 22:33:34 -- common/autotest_common.sh@10 -- # set +x 00:13:34.037 [2024-11-20 22:33:34.488071] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.037 22:33:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.037 22:33:34 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:34.037 22:33:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.037 22:33:34 -- common/autotest_common.sh@10 -- # set +x 00:13:34.037 22:33:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.037 22:33:34 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.037 22:33:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.037 22:33:34 -- common/autotest_common.sh@10 -- # set +x 00:13:34.037 [2024-11-20 22:33:34.504202] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.037 22:33:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.037 22:33:34 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:34.037 22:33:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.037 22:33:34 -- common/autotest_common.sh@10 -- # set +x 00:13:34.037 NULL1 00:13:34.037 22:33:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.037 22:33:34 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:34.037 22:33:34 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:34.037 22:33:34 -- common/autotest_common.sh@10 -- # set +x 00:13:34.037 22:33:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.037 22:33:34 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:34.037 22:33:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.037 22:33:34 -- common/autotest_common.sh@10 -- # set +x 00:13:34.037 22:33:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.037 22:33:34 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:34.037 [2024-11-20 22:33:34.556167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:34.037 [2024-11-20 22:33:34.556217] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82071 ] 00:13:34.296 Attached to nqn.2016-06.io.spdk:cnode1 00:13:34.296 Namespace ID: 1 size: 1GB 00:13:34.296 fused_ordering(0) 00:13:34.296 fused_ordering(1) 00:13:34.296 fused_ordering(2) 00:13:34.296 fused_ordering(3) 00:13:34.296 fused_ordering(4) 00:13:34.296 fused_ordering(5) 00:13:34.296 fused_ordering(6) 00:13:34.296 fused_ordering(7) 00:13:34.296 fused_ordering(8) 00:13:34.296 fused_ordering(9) 00:13:34.296 fused_ordering(10) 00:13:34.296 fused_ordering(11) 00:13:34.296 fused_ordering(12) 00:13:34.296 fused_ordering(13) 00:13:34.296 fused_ordering(14) 00:13:34.296 fused_ordering(15) 00:13:34.296 fused_ordering(16) 00:13:34.296 fused_ordering(17) 00:13:34.296 fused_ordering(18) 00:13:34.296 fused_ordering(19) 00:13:34.296 fused_ordering(20) 00:13:34.296 fused_ordering(21) 00:13:34.296 fused_ordering(22) 00:13:34.296 fused_ordering(23) 00:13:34.296 fused_ordering(24) 00:13:34.296 fused_ordering(25) 00:13:34.296 fused_ordering(26) 00:13:34.296 fused_ordering(27) 00:13:34.296 fused_ordering(28) 00:13:34.296 fused_ordering(29) 00:13:34.296 fused_ordering(30) 00:13:34.296 fused_ordering(31) 00:13:34.296 fused_ordering(32) 00:13:34.296 fused_ordering(33) 00:13:34.296 fused_ordering(34) 00:13:34.296 fused_ordering(35) 00:13:34.296 fused_ordering(36) 00:13:34.296 fused_ordering(37) 00:13:34.296 fused_ordering(38) 00:13:34.296 fused_ordering(39) 00:13:34.296 fused_ordering(40) 00:13:34.296 fused_ordering(41) 00:13:34.296 fused_ordering(42) 00:13:34.296 fused_ordering(43) 00:13:34.296 fused_ordering(44) 00:13:34.296 fused_ordering(45) 00:13:34.296 fused_ordering(46) 00:13:34.296 fused_ordering(47) 00:13:34.296 fused_ordering(48) 00:13:34.296 fused_ordering(49) 00:13:34.296 fused_ordering(50) 00:13:34.296 fused_ordering(51) 00:13:34.296 fused_ordering(52) 00:13:34.296 fused_ordering(53) 00:13:34.296 fused_ordering(54) 00:13:34.296 fused_ordering(55) 00:13:34.296 fused_ordering(56) 00:13:34.296 fused_ordering(57) 00:13:34.296 fused_ordering(58) 00:13:34.296 fused_ordering(59) 00:13:34.296 fused_ordering(60) 00:13:34.296 fused_ordering(61) 00:13:34.296 fused_ordering(62) 00:13:34.296 fused_ordering(63) 00:13:34.296 fused_ordering(64) 00:13:34.296 fused_ordering(65) 00:13:34.296 fused_ordering(66) 00:13:34.296 fused_ordering(67) 00:13:34.296 fused_ordering(68) 00:13:34.296 fused_ordering(69) 00:13:34.296 fused_ordering(70) 00:13:34.296 fused_ordering(71) 00:13:34.296 fused_ordering(72) 00:13:34.296 
fused_ordering(73) 00:13:34.296 fused_ordering(74) 00:13:34.296 fused_ordering(75) 00:13:34.296 fused_ordering(76) 00:13:34.296 fused_ordering(77) 00:13:34.296 fused_ordering(78) 00:13:34.296 fused_ordering(79) 00:13:34.296 fused_ordering(80) 00:13:34.296 fused_ordering(81) 00:13:34.296 fused_ordering(82) 00:13:34.296 fused_ordering(83) 00:13:34.296 fused_ordering(84) 00:13:34.296 fused_ordering(85) 00:13:34.296 fused_ordering(86) 00:13:34.296 fused_ordering(87) 00:13:34.296 fused_ordering(88) 00:13:34.296 fused_ordering(89) 00:13:34.296 fused_ordering(90) 00:13:34.296 fused_ordering(91) 00:13:34.296 fused_ordering(92) 00:13:34.296 fused_ordering(93) 00:13:34.296 fused_ordering(94) 00:13:34.296 fused_ordering(95) 00:13:34.296 fused_ordering(96) 00:13:34.296 fused_ordering(97) 00:13:34.296 fused_ordering(98) 00:13:34.296 fused_ordering(99) 00:13:34.296 fused_ordering(100) 00:13:34.296 fused_ordering(101) 00:13:34.296 fused_ordering(102) 00:13:34.296 fused_ordering(103) 00:13:34.296 fused_ordering(104) 00:13:34.296 fused_ordering(105) 00:13:34.296 fused_ordering(106) 00:13:34.296 fused_ordering(107) 00:13:34.296 fused_ordering(108) 00:13:34.296 fused_ordering(109) 00:13:34.296 fused_ordering(110) 00:13:34.296 fused_ordering(111) 00:13:34.296 fused_ordering(112) 00:13:34.296 fused_ordering(113) 00:13:34.296 fused_ordering(114) 00:13:34.296 fused_ordering(115) 00:13:34.296 fused_ordering(116) 00:13:34.296 fused_ordering(117) 00:13:34.296 fused_ordering(118) 00:13:34.296 fused_ordering(119) 00:13:34.296 fused_ordering(120) 00:13:34.296 fused_ordering(121) 00:13:34.296 fused_ordering(122) 00:13:34.296 fused_ordering(123) 00:13:34.296 fused_ordering(124) 00:13:34.296 fused_ordering(125) 00:13:34.296 fused_ordering(126) 00:13:34.296 fused_ordering(127) 00:13:34.296 fused_ordering(128) 00:13:34.296 fused_ordering(129) 00:13:34.296 fused_ordering(130) 00:13:34.296 fused_ordering(131) 00:13:34.296 fused_ordering(132) 00:13:34.296 fused_ordering(133) 00:13:34.296 fused_ordering(134) 00:13:34.296 fused_ordering(135) 00:13:34.296 fused_ordering(136) 00:13:34.296 fused_ordering(137) 00:13:34.296 fused_ordering(138) 00:13:34.296 fused_ordering(139) 00:13:34.296 fused_ordering(140) 00:13:34.296 fused_ordering(141) 00:13:34.296 fused_ordering(142) 00:13:34.296 fused_ordering(143) 00:13:34.296 fused_ordering(144) 00:13:34.296 fused_ordering(145) 00:13:34.296 fused_ordering(146) 00:13:34.296 fused_ordering(147) 00:13:34.296 fused_ordering(148) 00:13:34.296 fused_ordering(149) 00:13:34.296 fused_ordering(150) 00:13:34.296 fused_ordering(151) 00:13:34.296 fused_ordering(152) 00:13:34.296 fused_ordering(153) 00:13:34.296 fused_ordering(154) 00:13:34.296 fused_ordering(155) 00:13:34.296 fused_ordering(156) 00:13:34.296 fused_ordering(157) 00:13:34.296 fused_ordering(158) 00:13:34.296 fused_ordering(159) 00:13:34.296 fused_ordering(160) 00:13:34.296 fused_ordering(161) 00:13:34.296 fused_ordering(162) 00:13:34.296 fused_ordering(163) 00:13:34.296 fused_ordering(164) 00:13:34.296 fused_ordering(165) 00:13:34.296 fused_ordering(166) 00:13:34.296 fused_ordering(167) 00:13:34.296 fused_ordering(168) 00:13:34.296 fused_ordering(169) 00:13:34.296 fused_ordering(170) 00:13:34.296 fused_ordering(171) 00:13:34.296 fused_ordering(172) 00:13:34.296 fused_ordering(173) 00:13:34.296 fused_ordering(174) 00:13:34.296 fused_ordering(175) 00:13:34.296 fused_ordering(176) 00:13:34.296 fused_ordering(177) 00:13:34.297 fused_ordering(178) 00:13:34.297 fused_ordering(179) 00:13:34.297 fused_ordering(180) 00:13:34.297 
fused_ordering(181) 00:13:34.297 fused_ordering(182) 00:13:34.297 fused_ordering(183) 00:13:34.297 ... 00:13:35.643 fused_ordering(1022) 00:13:35.643 fused_ordering(1023) 00:13:35.643 22:33:36 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:35.643 22:33:36 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:35.643 22:33:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:35.643 22:33:36 -- nvmf/common.sh@116 -- # sync 00:13:35.902 22:33:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:35.902 22:33:36 -- nvmf/common.sh@119 -- # set +e 00:13:35.902 22:33:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:35.902 22:33:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:35.902 rmmod
nvme_tcp 00:13:35.902 rmmod nvme_fabrics 00:13:35.902 rmmod nvme_keyring 00:13:35.902 22:33:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:35.902 22:33:36 -- nvmf/common.sh@123 -- # set -e 00:13:35.902 22:33:36 -- nvmf/common.sh@124 -- # return 0 00:13:35.902 22:33:36 -- nvmf/common.sh@477 -- # '[' -n 82021 ']' 00:13:35.902 22:33:36 -- nvmf/common.sh@478 -- # killprocess 82021 00:13:35.902 22:33:36 -- common/autotest_common.sh@936 -- # '[' -z 82021 ']' 00:13:35.902 22:33:36 -- common/autotest_common.sh@940 -- # kill -0 82021 00:13:35.902 22:33:36 -- common/autotest_common.sh@941 -- # uname 00:13:35.902 22:33:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:35.902 22:33:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82021 00:13:35.902 22:33:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:35.902 22:33:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:35.902 killing process with pid 82021 00:13:35.902 22:33:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82021' 00:13:35.902 22:33:36 -- common/autotest_common.sh@955 -- # kill 82021 00:13:35.902 22:33:36 -- common/autotest_common.sh@960 -- # wait 82021 00:13:36.161 22:33:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:36.161 22:33:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:36.161 22:33:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:36.161 22:33:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.161 22:33:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:36.161 22:33:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.161 22:33:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.161 22:33:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.161 22:33:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:36.161 00:13:36.161 real 0m3.893s 00:13:36.161 user 0m4.436s 00:13:36.161 sys 0m1.427s 00:13:36.161 22:33:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:36.161 ************************************ 00:13:36.161 END TEST nvmf_fused_ordering 00:13:36.161 ************************************ 00:13:36.161 22:33:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.161 22:33:36 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:36.161 22:33:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:36.161 22:33:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.161 22:33:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.161 ************************************ 00:13:36.161 START TEST nvmf_delete_subsystem 00:13:36.161 ************************************ 00:13:36.161 22:33:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:36.161 * Looking for test storage... 
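For reference, the run_test wrapper above just invokes the per-test script with the transport argument passed through. A minimal manual reproduction, assuming the same built workspace under /home/vagrant/spdk_repo/spdk and root privileges (paths and flags are taken from this trace, not an official invocation), would look roughly like:

  cd /home/vagrant/spdk_repo/spdk
  # the script drives its own nvmftestinit/nvmftestfini setup and teardown
  sudo ./test/nvmf/target/delete_subsystem.sh --transport=tcp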
00:13:36.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:36.161 22:33:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:36.161 22:33:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:36.161 22:33:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:36.421 22:33:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:36.421 22:33:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:36.421 22:33:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:36.421 22:33:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:36.421 22:33:36 -- scripts/common.sh@335 -- # IFS=.-: 00:13:36.421 22:33:36 -- scripts/common.sh@335 -- # read -ra ver1 00:13:36.421 22:33:36 -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.421 22:33:36 -- scripts/common.sh@336 -- # read -ra ver2 00:13:36.421 22:33:36 -- scripts/common.sh@337 -- # local 'op=<' 00:13:36.421 22:33:36 -- scripts/common.sh@339 -- # ver1_l=2 00:13:36.421 22:33:36 -- scripts/common.sh@340 -- # ver2_l=1 00:13:36.421 22:33:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:36.421 22:33:36 -- scripts/common.sh@343 -- # case "$op" in 00:13:36.421 22:33:36 -- scripts/common.sh@344 -- # : 1 00:13:36.421 22:33:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:36.421 22:33:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.421 22:33:36 -- scripts/common.sh@364 -- # decimal 1 00:13:36.421 22:33:36 -- scripts/common.sh@352 -- # local d=1 00:13:36.421 22:33:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.421 22:33:36 -- scripts/common.sh@354 -- # echo 1 00:13:36.421 22:33:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:36.421 22:33:36 -- scripts/common.sh@365 -- # decimal 2 00:13:36.421 22:33:36 -- scripts/common.sh@352 -- # local d=2 00:13:36.421 22:33:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.421 22:33:36 -- scripts/common.sh@354 -- # echo 2 00:13:36.421 22:33:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:36.421 22:33:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:36.421 22:33:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:36.421 22:33:36 -- scripts/common.sh@367 -- # return 0 00:13:36.421 22:33:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.421 22:33:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.421 --rc genhtml_branch_coverage=1 00:13:36.421 --rc genhtml_function_coverage=1 00:13:36.421 --rc genhtml_legend=1 00:13:36.421 --rc geninfo_all_blocks=1 00:13:36.421 --rc geninfo_unexecuted_blocks=1 00:13:36.421 00:13:36.421 ' 00:13:36.421 22:33:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.421 --rc genhtml_branch_coverage=1 00:13:36.421 --rc genhtml_function_coverage=1 00:13:36.421 --rc genhtml_legend=1 00:13:36.421 --rc geninfo_all_blocks=1 00:13:36.421 --rc geninfo_unexecuted_blocks=1 00:13:36.421 00:13:36.421 ' 00:13:36.421 22:33:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.421 --rc genhtml_branch_coverage=1 00:13:36.421 --rc genhtml_function_coverage=1 00:13:36.421 --rc genhtml_legend=1 00:13:36.421 --rc geninfo_all_blocks=1 00:13:36.421 --rc geninfo_unexecuted_blocks=1 00:13:36.421 00:13:36.421 ' 00:13:36.421 
22:33:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.421 --rc genhtml_branch_coverage=1 00:13:36.421 --rc genhtml_function_coverage=1 00:13:36.421 --rc genhtml_legend=1 00:13:36.421 --rc geninfo_all_blocks=1 00:13:36.421 --rc geninfo_unexecuted_blocks=1 00:13:36.421 00:13:36.421 ' 00:13:36.421 22:33:36 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:36.421 22:33:36 -- nvmf/common.sh@7 -- # uname -s 00:13:36.421 22:33:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.421 22:33:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.421 22:33:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.421 22:33:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.421 22:33:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.422 22:33:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.422 22:33:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.422 22:33:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.422 22:33:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.422 22:33:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.422 22:33:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:13:36.422 22:33:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:13:36.422 22:33:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.422 22:33:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.422 22:33:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:36.422 22:33:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:36.422 22:33:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.422 22:33:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.422 22:33:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.422 22:33:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.422 22:33:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.422 22:33:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.422 22:33:36 -- paths/export.sh@5 -- # export PATH 00:13:36.422 22:33:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.422 22:33:36 -- nvmf/common.sh@46 -- # : 0 00:13:36.422 22:33:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:36.422 22:33:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:36.422 22:33:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:36.422 22:33:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.422 22:33:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.422 22:33:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:36.422 22:33:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:36.422 22:33:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:36.422 22:33:36 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:36.422 22:33:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:36.422 22:33:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.422 22:33:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:36.422 22:33:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:36.422 22:33:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:36.422 22:33:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.422 22:33:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.422 22:33:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.422 22:33:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:36.422 22:33:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:36.422 22:33:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:36.422 22:33:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:36.422 22:33:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:36.422 22:33:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:36.422 22:33:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.422 22:33:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.422 22:33:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:36.422 22:33:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:36.422 22:33:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:36.422 22:33:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:36.422 22:33:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:36.422 22:33:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:36.422 22:33:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:36.422 22:33:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:36.422 22:33:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:36.422 22:33:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:36.422 22:33:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:36.422 22:33:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:36.422 Cannot find device "nvmf_tgt_br" 00:13:36.422 22:33:37 -- nvmf/common.sh@154 -- # true 00:13:36.422 22:33:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:36.422 Cannot find device "nvmf_tgt_br2" 00:13:36.422 22:33:37 -- nvmf/common.sh@155 -- # true 00:13:36.422 22:33:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:36.422 22:33:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:36.422 Cannot find device "nvmf_tgt_br" 00:13:36.422 22:33:37 -- nvmf/common.sh@157 -- # true 00:13:36.422 22:33:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:36.422 Cannot find device "nvmf_tgt_br2" 00:13:36.422 22:33:37 -- nvmf/common.sh@158 -- # true 00:13:36.422 22:33:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:36.422 22:33:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:36.422 22:33:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:36.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.422 22:33:37 -- nvmf/common.sh@161 -- # true 00:13:36.422 22:33:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:36.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.422 22:33:37 -- nvmf/common.sh@162 -- # true 00:13:36.422 22:33:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:36.422 22:33:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:36.422 22:33:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:36.422 22:33:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:36.422 22:33:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:36.682 22:33:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:36.682 22:33:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:36.682 22:33:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:36.682 22:33:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:36.682 22:33:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:36.682 22:33:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:36.682 22:33:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:36.682 22:33:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:36.682 22:33:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:36.682 22:33:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:36.682 22:33:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:36.682 22:33:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:36.682 22:33:37 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:36.682 22:33:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:36.682 22:33:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:36.682 22:33:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:36.682 22:33:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:36.682 22:33:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:36.682 22:33:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:36.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:36.682 00:13:36.682 --- 10.0.0.2 ping statistics --- 00:13:36.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.682 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:36.682 22:33:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:36.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:36.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:13:36.682 00:13:36.682 --- 10.0.0.3 ping statistics --- 00:13:36.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.682 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:36.682 22:33:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:36.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:36.682 00:13:36.682 --- 10.0.0.1 ping statistics --- 00:13:36.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.682 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:36.682 22:33:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.682 22:33:37 -- nvmf/common.sh@421 -- # return 0 00:13:36.682 22:33:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:36.682 22:33:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.682 22:33:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:36.682 22:33:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:36.682 22:33:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.682 22:33:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:36.682 22:33:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:36.682 22:33:37 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:36.682 22:33:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:36.682 22:33:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:36.682 22:33:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.682 22:33:37 -- nvmf/common.sh@469 -- # nvmfpid=82286 00:13:36.682 22:33:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:36.682 22:33:37 -- nvmf/common.sh@470 -- # waitforlisten 82286 00:13:36.682 22:33:37 -- common/autotest_common.sh@829 -- # '[' -z 82286 ']' 00:13:36.682 22:33:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.682 22:33:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.682 22:33:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
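The nvmf_veth_init sequence traced above builds a small bridged topology: nvmf_init_if stays in the root namespace at 10.0.0.1, nvmf_tgt_if and nvmf_tgt_if2 are moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and the veth peers are joined through the nvmf_br bridge with TCP port 4420 allowed in. A condensed, hand-runnable sketch of the same setup (interface names and addresses as used here; the second target interface is left out for brevity, and this is not the harness's exact helper):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target then runs inside the namespace, as in the nvmfappstart trace above
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3

With the bridge up, the ping checks above confirm that 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and 10.0.0.1 is reachable from inside the namespace before the target is started.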
00:13:36.682 22:33:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.682 22:33:37 -- common/autotest_common.sh@10 -- # set +x 00:13:36.682 [2024-11-20 22:33:37.360635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:36.682 [2024-11-20 22:33:37.360756] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.941 [2024-11-20 22:33:37.500174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:36.941 [2024-11-20 22:33:37.578450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:36.941 [2024-11-20 22:33:37.578652] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.941 [2024-11-20 22:33:37.578670] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.941 [2024-11-20 22:33:37.578682] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.941 [2024-11-20 22:33:37.578882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.941 [2024-11-20 22:33:37.578929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.876 22:33:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.876 22:33:38 -- common/autotest_common.sh@862 -- # return 0 00:13:37.876 22:33:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:37.876 22:33:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:37.876 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.876 22:33:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.876 22:33:38 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.876 22:33:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.876 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.876 [2024-11-20 22:33:38.355865] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.876 22:33:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.876 22:33:38 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:37.876 22:33:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.876 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.876 22:33:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.876 22:33:38 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.876 22:33:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.876 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.876 [2024-11-20 22:33:38.372066] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.876 22:33:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.876 22:33:38 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:37.876 22:33:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.876 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.876 NULL1 00:13:37.876 22:33:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.877 22:33:38 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:37.877 22:33:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.877 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.877 Delay0 00:13:37.877 22:33:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.877 22:33:38 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.877 22:33:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.877 22:33:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.877 22:33:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.877 22:33:38 -- target/delete_subsystem.sh@28 -- # perf_pid=82337 00:13:37.877 22:33:38 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:37.877 22:33:38 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:37.877 [2024-11-20 22:33:38.556483] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:39.775 22:33:40 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.775 22:33:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.775 22:33:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.033 Write completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 starting I/O failed: -6 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Write completed with error (sct=0, sc=8) 00:13:40.033 Write completed with error (sct=0, sc=8) 00:13:40.033 starting I/O failed: -6 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 starting I/O failed: -6 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 starting I/O failed: -6 00:13:40.033 Write completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 starting I/O failed: -6 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Read completed with error (sct=0, sc=8) 00:13:40.033 Write completed with error (sct=0, sc=8) 00:13:40.033 Write completed with error (sct=0, sc=8) 00:13:40.034 starting I/O failed: -6 00:13:40.034 Read completed with error (sct=0, sc=8) 00:13:40.034 Write completed with error (sct=0, sc=8) 00:13:40.034 Write completed with error (sct=0, sc=8) 00:13:40.034 Write completed with error (sct=0, sc=8) 00:13:40.034 starting I/O failed: -6 00:13:40.034 Write completed with error (sct=0, sc=8) 00:13:40.034 Read completed with error (sct=0, sc=8) 00:13:40.034 Write completed with error (sct=0, sc=8) 00:13:40.034 Read completed with error (sct=0, sc=8) 00:13:40.034 
starting I/O failed: -6 00:13:40.034 Read completed with error (sct=0, sc=8) 00:13:40.034 Write completed with error (sct=0, sc=8) 00:13:40.034 ... starting I/O failed: -6 [2024-11-20 22:33:40.596687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44610 is same with the state(5) to be set ... [2024-11-20 22:33:41.569892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b07040 is same with the state(5) to be set ... [2024-11-20 22:33:41.595988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b44360 is same with the state(5) to be set 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error
(sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 [2024-11-20 22:33:41.596576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b448c0 is same with the state(5) to be set 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 [2024-11-20 22:33:41.597035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f688000c600 is same with the state(5) to be set 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 
00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Read completed with error (sct=0, sc=8) 00:13:40.971 Write completed with error (sct=0, sc=8) 00:13:40.971 [2024-11-20 22:33:41.597256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f688000bf20 is same with the state(5) to be set 00:13:40.971 [2024-11-20 22:33:41.598466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b07040 (9): Bad file descriptor 00:13:40.971 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:40.971 22:33:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.971 22:33:41 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:40.971 22:33:41 -- target/delete_subsystem.sh@35 -- # kill -0 82337 00:13:40.971 22:33:41 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:40.971 Initializing NVMe Controllers 00:13:40.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.971 Controller IO queue size 128, less than required. 00:13:40.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:40.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:40.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:40.971 Initialization complete. Launching workers. 
00:13:40.971 ======================================================== 00:13:40.971 Latency(us) 00:13:40.971 Device Information : IOPS MiB/s Average min max 00:13:40.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.96 0.08 887121.40 1249.94 1015615.20 00:13:40.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 191.31 0.09 892086.56 560.97 1017006.29 00:13:40.971 ======================================================== 00:13:40.971 Total : 365.27 0.18 889721.88 560.97 1017006.29 00:13:40.971 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@35 -- # kill -0 82337 00:13:41.540 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82337) - No such process 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@45 -- # NOT wait 82337 00:13:41.540 22:33:42 -- common/autotest_common.sh@650 -- # local es=0 00:13:41.540 22:33:42 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82337 00:13:41.540 22:33:42 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:41.540 22:33:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.540 22:33:42 -- common/autotest_common.sh@642 -- # type -t wait 00:13:41.540 22:33:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.540 22:33:42 -- common/autotest_common.sh@653 -- # wait 82337 00:13:41.540 22:33:42 -- common/autotest_common.sh@653 -- # es=1 00:13:41.540 22:33:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.540 22:33:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.540 22:33:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.540 22:33:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.540 22:33:42 -- common/autotest_common.sh@10 -- # set +x 00:13:41.540 22:33:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.540 22:33:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.540 22:33:42 -- common/autotest_common.sh@10 -- # set +x 00:13:41.540 [2024-11-20 22:33:42.129713] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.540 22:33:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.540 22:33:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.540 22:33:42 -- common/autotest_common.sh@10 -- # set +x 00:13:41.540 22:33:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@54 -- # perf_pid=82383 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@57 -- # kill -0 82383 00:13:41.540 22:33:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.799 [2024-11-20 22:33:42.304763] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:42.056 22:33:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.056 22:33:42 -- target/delete_subsystem.sh@57 -- # kill -0 82383 00:13:42.057 22:33:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.622 22:33:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.622 22:33:43 -- target/delete_subsystem.sh@57 -- # kill -0 82383 00:13:42.622 22:33:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.188 22:33:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.188 22:33:43 -- target/delete_subsystem.sh@57 -- # kill -0 82383 00:13:43.188 22:33:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.446 22:33:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.446 22:33:44 -- target/delete_subsystem.sh@57 -- # kill -0 82383 00:13:43.446 22:33:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.012 22:33:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:44.012 22:33:44 -- target/delete_subsystem.sh@57 -- # kill -0 82383 00:13:44.012 22:33:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.577 22:33:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:44.577 22:33:45 -- target/delete_subsystem.sh@57 -- # kill -0 82383 00:13:44.577 22:33:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.835 Initializing NVMe Controllers 00:13:44.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.835 Controller IO queue size 128, less than required. 00:13:44.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:44.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:44.835 Initialization complete. Launching workers. 
00:13:44.835 ======================================================== 00:13:44.835 Latency(us) 00:13:44.836 Device Information : IOPS MiB/s Average min max 00:13:44.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003943.40 1000154.21 1013744.41 00:13:44.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006133.22 1000166.32 1043363.43 00:13:44.836 ======================================================== 00:13:44.836 Total : 256.00 0.12 1005038.31 1000154.21 1043363.43 00:13:44.836 00:13:45.095 22:33:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:45.095 22:33:45 -- target/delete_subsystem.sh@57 -- # kill -0 82383 00:13:45.095 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82383) - No such process 00:13:45.095 22:33:45 -- target/delete_subsystem.sh@67 -- # wait 82383 00:13:45.095 22:33:45 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:45.095 22:33:45 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:45.095 22:33:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:45.095 22:33:45 -- nvmf/common.sh@116 -- # sync 00:13:45.095 22:33:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:45.095 22:33:45 -- nvmf/common.sh@119 -- # set +e 00:13:45.095 22:33:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:45.095 22:33:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:45.095 rmmod nvme_tcp 00:13:45.095 rmmod nvme_fabrics 00:13:45.095 rmmod nvme_keyring 00:13:45.095 22:33:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:45.095 22:33:45 -- nvmf/common.sh@123 -- # set -e 00:13:45.095 22:33:45 -- nvmf/common.sh@124 -- # return 0 00:13:45.095 22:33:45 -- nvmf/common.sh@477 -- # '[' -n 82286 ']' 00:13:45.095 22:33:45 -- nvmf/common.sh@478 -- # killprocess 82286 00:13:45.095 22:33:45 -- common/autotest_common.sh@936 -- # '[' -z 82286 ']' 00:13:45.095 22:33:45 -- common/autotest_common.sh@940 -- # kill -0 82286 00:13:45.095 22:33:45 -- common/autotest_common.sh@941 -- # uname 00:13:45.095 22:33:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.095 22:33:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82286 00:13:45.354 killing process with pid 82286 00:13:45.354 22:33:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:45.354 22:33:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:45.354 22:33:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82286' 00:13:45.354 22:33:45 -- common/autotest_common.sh@955 -- # kill 82286 00:13:45.354 22:33:45 -- common/autotest_common.sh@960 -- # wait 82286 00:13:45.612 22:33:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:45.612 22:33:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:45.612 22:33:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:45.612 22:33:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.612 22:33:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:45.612 22:33:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.612 22:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.612 22:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.612 22:33:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:45.612 00:13:45.612 real 0m9.367s 00:13:45.612 user 0m29.141s 00:13:45.612 sys 0m1.256s 00:13:45.612 22:33:46 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:13:45.612 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.612 ************************************ 00:13:45.612 END TEST nvmf_delete_subsystem 00:13:45.612 ************************************ 00:13:45.612 22:33:46 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:45.612 22:33:46 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:45.612 22:33:46 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:45.612 22:33:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:45.612 22:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.613 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.613 ************************************ 00:13:45.613 START TEST nvmf_host_management 00:13:45.613 ************************************ 00:13:45.613 22:33:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:45.613 * Looking for test storage... 00:13:45.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.613 22:33:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:45.613 22:33:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:45.613 22:33:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:45.872 22:33:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:45.872 22:33:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:45.872 22:33:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:45.872 22:33:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:45.872 22:33:46 -- scripts/common.sh@335 -- # IFS=.-: 00:13:45.872 22:33:46 -- scripts/common.sh@335 -- # read -ra ver1 00:13:45.872 22:33:46 -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.872 22:33:46 -- scripts/common.sh@336 -- # read -ra ver2 00:13:45.872 22:33:46 -- scripts/common.sh@337 -- # local 'op=<' 00:13:45.872 22:33:46 -- scripts/common.sh@339 -- # ver1_l=2 00:13:45.872 22:33:46 -- scripts/common.sh@340 -- # ver2_l=1 00:13:45.872 22:33:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:45.872 22:33:46 -- scripts/common.sh@343 -- # case "$op" in 00:13:45.872 22:33:46 -- scripts/common.sh@344 -- # : 1 00:13:45.872 22:33:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:45.872 22:33:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.872 22:33:46 -- scripts/common.sh@364 -- # decimal 1 00:13:45.872 22:33:46 -- scripts/common.sh@352 -- # local d=1 00:13:45.872 22:33:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.872 22:33:46 -- scripts/common.sh@354 -- # echo 1 00:13:45.872 22:33:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:45.872 22:33:46 -- scripts/common.sh@365 -- # decimal 2 00:13:45.872 22:33:46 -- scripts/common.sh@352 -- # local d=2 00:13:45.872 22:33:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.872 22:33:46 -- scripts/common.sh@354 -- # echo 2 00:13:45.872 22:33:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:45.872 22:33:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:45.872 22:33:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:45.872 22:33:46 -- scripts/common.sh@367 -- # return 0 00:13:45.872 22:33:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.872 22:33:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:45.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.872 --rc genhtml_branch_coverage=1 00:13:45.872 --rc genhtml_function_coverage=1 00:13:45.872 --rc genhtml_legend=1 00:13:45.872 --rc geninfo_all_blocks=1 00:13:45.872 --rc geninfo_unexecuted_blocks=1 00:13:45.872 00:13:45.872 ' 00:13:45.872 22:33:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:45.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.872 --rc genhtml_branch_coverage=1 00:13:45.872 --rc genhtml_function_coverage=1 00:13:45.872 --rc genhtml_legend=1 00:13:45.872 --rc geninfo_all_blocks=1 00:13:45.872 --rc geninfo_unexecuted_blocks=1 00:13:45.872 00:13:45.872 ' 00:13:45.872 22:33:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:45.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.872 --rc genhtml_branch_coverage=1 00:13:45.872 --rc genhtml_function_coverage=1 00:13:45.872 --rc genhtml_legend=1 00:13:45.872 --rc geninfo_all_blocks=1 00:13:45.872 --rc geninfo_unexecuted_blocks=1 00:13:45.872 00:13:45.872 ' 00:13:45.872 22:33:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:45.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.872 --rc genhtml_branch_coverage=1 00:13:45.872 --rc genhtml_function_coverage=1 00:13:45.872 --rc genhtml_legend=1 00:13:45.872 --rc geninfo_all_blocks=1 00:13:45.872 --rc geninfo_unexecuted_blocks=1 00:13:45.872 00:13:45.872 ' 00:13:45.872 22:33:46 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.872 22:33:46 -- nvmf/common.sh@7 -- # uname -s 00:13:45.872 22:33:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.872 22:33:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.872 22:33:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.872 22:33:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.872 22:33:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.872 22:33:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.872 22:33:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.872 22:33:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.872 22:33:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.872 22:33:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.872 22:33:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
00:13:45.872 22:33:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:13:45.872 22:33:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.872 22:33:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.872 22:33:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.872 22:33:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.872 22:33:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.872 22:33:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.872 22:33:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.872 22:33:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.872 22:33:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.872 22:33:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.872 22:33:46 -- paths/export.sh@5 -- # export PATH 00:13:45.872 22:33:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.872 22:33:46 -- nvmf/common.sh@46 -- # : 0 00:13:45.872 22:33:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:45.872 22:33:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:45.873 22:33:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:45.873 22:33:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.873 22:33:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.873 22:33:46 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:45.873 22:33:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:45.873 22:33:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:45.873 22:33:46 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.873 22:33:46 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.873 22:33:46 -- target/host_management.sh@104 -- # nvmftestinit 00:13:45.873 22:33:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:45.873 22:33:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.873 22:33:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:45.873 22:33:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:45.873 22:33:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:45.873 22:33:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.873 22:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.873 22:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.873 22:33:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:45.873 22:33:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:45.873 22:33:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:45.873 22:33:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:45.873 22:33:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:45.873 22:33:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:45.873 22:33:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.873 22:33:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.873 22:33:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.873 22:33:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:45.873 22:33:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.873 22:33:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.873 22:33:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.873 22:33:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.873 22:33:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.873 22:33:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.873 22:33:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.873 22:33:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.873 22:33:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:45.873 22:33:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:45.873 Cannot find device "nvmf_tgt_br" 00:13:45.873 22:33:46 -- nvmf/common.sh@154 -- # true 00:13:45.873 22:33:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.873 Cannot find device "nvmf_tgt_br2" 00:13:45.873 22:33:46 -- nvmf/common.sh@155 -- # true 00:13:45.873 22:33:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:45.873 22:33:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:45.873 Cannot find device "nvmf_tgt_br" 00:13:45.873 22:33:46 -- nvmf/common.sh@157 -- # true 00:13:45.873 22:33:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:45.873 Cannot find device "nvmf_tgt_br2" 00:13:45.873 22:33:46 -- nvmf/common.sh@158 -- # true 00:13:45.873 22:33:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:45.873 22:33:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:45.873 22:33:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:45.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.873 22:33:46 -- nvmf/common.sh@161 -- # true 00:13:45.873 22:33:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.873 22:33:46 -- nvmf/common.sh@162 -- # true 00:13:45.873 22:33:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:45.873 22:33:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:45.873 22:33:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.873 22:33:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.873 22:33:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.873 22:33:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.873 22:33:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:45.873 22:33:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:46.132 22:33:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:46.132 22:33:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:46.132 22:33:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:46.132 22:33:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:46.132 22:33:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:46.132 22:33:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.132 22:33:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.132 22:33:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.132 22:33:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:46.132 22:33:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:46.132 22:33:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.132 22:33:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.132 22:33:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.132 22:33:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.132 22:33:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.132 22:33:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:46.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:13:46.132 00:13:46.132 --- 10.0.0.2 ping statistics --- 00:13:46.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.132 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:13:46.132 22:33:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:46.132 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.132 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:13:46.132 00:13:46.132 --- 10.0.0.3 ping statistics --- 00:13:46.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.132 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:46.132 22:33:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:46.132 00:13:46.132 --- 10.0.0.1 ping statistics --- 00:13:46.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.132 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:46.132 22:33:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.132 22:33:46 -- nvmf/common.sh@421 -- # return 0 00:13:46.132 22:33:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:46.132 22:33:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.132 22:33:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:46.132 22:33:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:46.132 22:33:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.132 22:33:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:46.132 22:33:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:46.132 22:33:46 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:46.132 22:33:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:46.132 22:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.132 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:13:46.132 ************************************ 00:13:46.132 START TEST nvmf_host_management 00:13:46.132 ************************************ 00:13:46.132 22:33:46 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:13:46.132 22:33:46 -- target/host_management.sh@69 -- # starttarget 00:13:46.132 22:33:46 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:46.132 22:33:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:46.132 22:33:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.132 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:13:46.132 22:33:46 -- nvmf/common.sh@469 -- # nvmfpid=82622 00:13:46.132 22:33:46 -- nvmf/common.sh@470 -- # waitforlisten 82622 00:13:46.132 22:33:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:46.132 22:33:46 -- common/autotest_common.sh@829 -- # '[' -z 82622 ']' 00:13:46.132 22:33:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.132 22:33:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.132 22:33:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.132 22:33:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.132 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:13:46.132 [2024-11-20 22:33:46.806730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:46.132 [2024-11-20 22:33:46.806826] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.391 [2024-11-20 22:33:46.947076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.391 [2024-11-20 22:33:47.008858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:46.391 [2024-11-20 22:33:47.009064] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
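The nvmf_veth_init sequence traced above gives the test its two-endpoint topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, and the veth peers are joined on the nvmf_br bridge before the links are verified with pings. A minimal standalone sketch of that layout, using the interface and namespace names from the trace and omitting the second target interface for brevity, would look roughly like this (the real helper is nvmf_veth_init in test/nvmf/common.sh):

# Sketch only; names and addresses are taken from the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator to target, the same reachability check as in the trace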
00:13:46.391 [2024-11-20 22:33:47.009078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.391 [2024-11-20 22:33:47.009087] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.391 [2024-11-20 22:33:47.009246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.391 [2024-11-20 22:33:47.009881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:46.391 [2024-11-20 22:33:47.009888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.391 [2024-11-20 22:33:47.009773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.326 22:33:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.326 22:33:47 -- common/autotest_common.sh@862 -- # return 0 00:13:47.326 22:33:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:47.326 22:33:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.326 22:33:47 -- common/autotest_common.sh@10 -- # set +x 00:13:47.326 22:33:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.326 22:33:47 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.326 22:33:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.326 22:33:47 -- common/autotest_common.sh@10 -- # set +x 00:13:47.326 [2024-11-20 22:33:47.906186] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.326 22:33:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.326 22:33:47 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:47.326 22:33:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.326 22:33:47 -- common/autotest_common.sh@10 -- # set +x 00:13:47.326 22:33:47 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:47.326 22:33:47 -- target/host_management.sh@23 -- # cat 00:13:47.326 22:33:47 -- target/host_management.sh@30 -- # rpc_cmd 00:13:47.326 22:33:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.326 22:33:47 -- common/autotest_common.sh@10 -- # set +x 00:13:47.326 Malloc0 00:13:47.326 [2024-11-20 22:33:47.981949] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.326 22:33:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.326 22:33:47 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:47.326 22:33:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.326 22:33:47 -- common/autotest_common.sh@10 -- # set +x 00:13:47.326 22:33:48 -- target/host_management.sh@73 -- # perfpid=82694 00:13:47.326 22:33:48 -- target/host_management.sh@74 -- # waitforlisten 82694 /var/tmp/bdevperf.sock 00:13:47.326 22:33:48 -- common/autotest_common.sh@829 -- # '[' -z 82694 ']' 00:13:47.326 22:33:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.326 22:33:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.326 22:33:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
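The create_subsystems step above batches its target-side provisioning through rpcs.txt (host_management.sh@22-30), and the file's contents are not echoed in the trace. Judging by the nvmf_create_transport call that is traced, by the Malloc0 bdev and cnode0 listener that appear right after it, and by the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults set earlier in the script, the equivalent rpc.py calls would be roughly the following; this is an approximate reconstruction, not a verbatim copy of rpcs.txt (rpc_cmd wraps scripts/rpc.py against the target's RPC socket):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420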
00:13:47.326 22:33:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.326 22:33:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.326 22:33:48 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:47.326 22:33:48 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:47.326 22:33:48 -- nvmf/common.sh@520 -- # config=() 00:13:47.326 22:33:48 -- nvmf/common.sh@520 -- # local subsystem config 00:13:47.326 22:33:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:47.326 22:33:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:47.326 { 00:13:47.326 "params": { 00:13:47.326 "name": "Nvme$subsystem", 00:13:47.326 "trtype": "$TEST_TRANSPORT", 00:13:47.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.326 "adrfam": "ipv4", 00:13:47.326 "trsvcid": "$NVMF_PORT", 00:13:47.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.326 "hdgst": ${hdgst:-false}, 00:13:47.326 "ddgst": ${ddgst:-false} 00:13:47.326 }, 00:13:47.326 "method": "bdev_nvme_attach_controller" 00:13:47.326 } 00:13:47.326 EOF 00:13:47.326 )") 00:13:47.326 22:33:48 -- nvmf/common.sh@542 -- # cat 00:13:47.326 22:33:48 -- nvmf/common.sh@544 -- # jq . 00:13:47.326 22:33:48 -- nvmf/common.sh@545 -- # IFS=, 00:13:47.326 22:33:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:47.326 "params": { 00:13:47.326 "name": "Nvme0", 00:13:47.326 "trtype": "tcp", 00:13:47.326 "traddr": "10.0.0.2", 00:13:47.326 "adrfam": "ipv4", 00:13:47.326 "trsvcid": "4420", 00:13:47.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:47.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:47.326 "hdgst": false, 00:13:47.326 "ddgst": false 00:13:47.326 }, 00:13:47.326 "method": "bdev_nvme_attach_controller" 00:13:47.326 }' 00:13:47.611 [2024-11-20 22:33:48.090986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:47.611 [2024-11-20 22:33:48.091066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82694 ] 00:13:47.611 [2024-11-20 22:33:48.229467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.611 [2024-11-20 22:33:48.304899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.898 Running I/O for 10 seconds... 
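gen_nvmf_target_json 0 above generates the controller block that bdevperf reads through --json /dev/fd/63; only the inner bdev_nvme_attach_controller fragment is printed in the trace, so the surrounding subsystems/bdev wrapper below is an assumption about the full config shape rather than a copy of what the helper emits. A hedged sketch of an equivalent standalone bdevperf run against the cnode0 listener:

# Config parameters copied from the fragment printed in the trace; wrapper structure assumed.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10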
00:13:48.471 22:33:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.471 22:33:49 -- common/autotest_common.sh@862 -- # return 0 00:13:48.471 22:33:49 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:48.471 22:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.471 22:33:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.471 22:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.471 22:33:49 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:48.471 22:33:49 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:48.471 22:33:49 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:48.471 22:33:49 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:48.471 22:33:49 -- target/host_management.sh@52 -- # local ret=1 00:13:48.471 22:33:49 -- target/host_management.sh@53 -- # local i 00:13:48.471 22:33:49 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:48.471 22:33:49 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:48.471 22:33:49 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:48.471 22:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.471 22:33:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.471 22:33:49 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:48.471 22:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.471 22:33:49 -- target/host_management.sh@55 -- # read_io_count=2469 00:13:48.471 22:33:49 -- target/host_management.sh@58 -- # '[' 2469 -ge 100 ']' 00:13:48.471 22:33:49 -- target/host_management.sh@59 -- # ret=0 00:13:48.471 22:33:49 -- target/host_management.sh@60 -- # break 00:13:48.471 22:33:49 -- target/host_management.sh@64 -- # return 0 00:13:48.471 22:33:49 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:48.471 22:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.471 22:33:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.471 [2024-11-20 22:33:49.179925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481530 is same with the state(5) to be set 00:13:48.471 [2024-11-20 22:33:49.180014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481530 is same with the state(5) to be set 00:13:48.471 [2024-11-20 22:33:49.180024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481530 is same with the state(5) to be set 00:13:48.471 [2024-11-20 22:33:49.180032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481530 is same with the state(5) to be set 00:13:48.471 [2024-11-20 22:33:49.180039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481530 is same with the state(5) to be set 00:13:48.471 [2024-11-20 22:33:49.180047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481530 is same with the state(5) to be set 00:13:48.471 [2024-11-20 22:33:49.180055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481530 is same with the state(5) to be set 00:13:48.471 [2024-11-20 22:33:49.180062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481530 is same with the 
state(5) to be set 00:13:48.471 [further tcp.c:1576 'The recv state of tqpair=0x1481530 is same with the state(5) to be set' messages repeated] 00:13:48.471 [2024-11-20 22:33:49.180730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.471 [2024-11-20 22:33:49.180770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.471 [the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for admin cid 1, 2 and 3] 00:13:48.471 [2024-11-20 22:33:49.180835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12352e0 is same with the state(5) to be set 00:13:48.471 [2024-11-20 22:33:49.181036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.471 [2024-11-20 22:33:49.181061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.471 [further READ/WRITE commands on qid:1 and their ABORTED - SQ DELETION completions repeated for the remaining outstanding I/O] 00:13:48.472 [2024-11-20 22:33:49.181637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.472 [2024-11-20 22:33:49.181919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.472 [2024-11-20 22:33:49.181928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.181937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.181946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.181955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.181964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.181973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.181982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.181992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:48.473 [2024-11-20 22:33:49.182026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 
[2024-11-20 22:33:49.182208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.473 [2024-11-20 22:33:49.182252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.473 [2024-11-20 22:33:49.182261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ce7c0 is same with the state(5) to be set 00:13:48.473 [2024-11-20 22:33:49.182376] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11ce7c0 was disconnected and freed. reset controller. 00:13:48.473 [2024-11-20 22:33:49.183379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:48.473 22:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.473 22:33:49 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:48.473 22:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.473 task offset: 79616 on job bdev=Nvme0n1 fails 00:13:48.473 00:13:48.473 Latency(us) 00:13:48.473 [2024-11-20T22:33:49.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.473 [2024-11-20T22:33:49.207Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:48.473 [2024-11-20T22:33:49.207Z] Job: Nvme0n1 ended in about 0.67 seconds with error 00:13:48.473 Verification LBA range: start 0x0 length 0x400 00:13:48.473 Nvme0n1 : 0.67 3998.18 249.89 95.80 0.00 15382.33 2055.45 21686.46 00:13:48.473 [2024-11-20T22:33:49.207Z] =================================================================================================================== 00:13:48.473 [2024-11-20T22:33:49.207Z] Total : 3998.18 249.89 95.80 0.00 15382.33 2055.45 21686.46 00:13:48.473 22:33:49 -- common/autotest_common.sh@10 -- # set +x 00:13:48.473 [2024-11-20 22:33:49.185010] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:48.473 [2024-11-20 22:33:49.185039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12352e0 (9): Bad file descriptor 00:13:48.473 22:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.473 22:33:49 -- target/host_management.sh@87 -- # sleep 1 00:13:48.473 [2024-11-20 22:33:49.194856] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
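Editor's note (not part of the captured console output): the rpc_cmd step traced above re-authorizes the initiator host on the target subsystem while the controller reset completes; rpc_cmd is the harness helper that issues the same RPCs exposed by scripts/rpc.py. A minimal standalone sketch of that step follows, assuming the target's default RPC socket /var/tmp/spdk.sock (the socket path is not shown in this log); nvmf_get_subsystems is a standard SPDK RPC added here only to confirm the allow list afterwards.

# Sketch: the host re-authorization traced above, issued directly rather than via rpc_cmd.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# List subsystems to check that host0 is now present in the subsystem's allowed hosts.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems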
00:13:49.845 22:33:50 -- target/host_management.sh@91 -- # kill -9 82694 00:13:49.845 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82694) - No such process 00:13:49.845 22:33:50 -- target/host_management.sh@91 -- # true 00:13:49.845 22:33:50 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:49.845 22:33:50 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:49.845 22:33:50 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:49.845 22:33:50 -- nvmf/common.sh@520 -- # config=() 00:13:49.845 22:33:50 -- nvmf/common.sh@520 -- # local subsystem config 00:13:49.845 22:33:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:49.845 22:33:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:49.845 { 00:13:49.845 "params": { 00:13:49.845 "name": "Nvme$subsystem", 00:13:49.845 "trtype": "$TEST_TRANSPORT", 00:13:49.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:49.845 "adrfam": "ipv4", 00:13:49.846 "trsvcid": "$NVMF_PORT", 00:13:49.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:49.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:49.846 "hdgst": ${hdgst:-false}, 00:13:49.846 "ddgst": ${ddgst:-false} 00:13:49.846 }, 00:13:49.846 "method": "bdev_nvme_attach_controller" 00:13:49.846 } 00:13:49.846 EOF 00:13:49.846 )") 00:13:49.846 22:33:50 -- nvmf/common.sh@542 -- # cat 00:13:49.846 22:33:50 -- nvmf/common.sh@544 -- # jq . 00:13:49.846 22:33:50 -- nvmf/common.sh@545 -- # IFS=, 00:13:49.846 22:33:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:49.846 "params": { 00:13:49.846 "name": "Nvme0", 00:13:49.846 "trtype": "tcp", 00:13:49.846 "traddr": "10.0.0.2", 00:13:49.846 "adrfam": "ipv4", 00:13:49.846 "trsvcid": "4420", 00:13:49.846 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:49.846 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:49.846 "hdgst": false, 00:13:49.846 "ddgst": false 00:13:49.846 }, 00:13:49.846 "method": "bdev_nvme_attach_controller" 00:13:49.846 }' 00:13:49.846 [2024-11-20 22:33:50.254921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:49.846 [2024-11-20 22:33:50.255006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82750 ] 00:13:49.846 [2024-11-20 22:33:50.394157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.846 [2024-11-20 22:33:50.459644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.103 Running I/O for 1 seconds... 
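Editor's note (not part of the captured console output): the gen_nvmf_target_json trace above shows how the harness builds the bdevperf configuration on the fly: one JSON fragment per subsystem is collected in a bash array via a heredoc, the fragments are joined with IFS=',' and printf, and the merged document is validated with jq before being handed to bdevperf through --json /dev/fd/62. A simplified, self-contained sketch of that pattern follows; the function name and the default values are illustrative only, and the real helper in nvmf/common.sh may set fields that are not visible in this trace.

# Sketch of the config-builder pattern traced above (illustrative; not the verbatim
# helper from nvmf/common.sh). Builds one bdev_nvme_attach_controller entry per
# subsystem number and prints the merged JSON that bdevperf would consume.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One JSON fragment per subsystem; unset hdgst/ddgst default to false,
        # mirroring the ${hdgst:-false}/${ddgst:-false} expansions in the trace.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas (IFS=,) and validate/pretty-print with jq,
    # as the nvmf/common.sh@544-@546 lines of the trace do.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}

# Typical use, mirroring the invocation traced above (process substitution supplies
# the /dev/fd path that appears in the log as --json /dev/fd/62):
# /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
#     --json <(gen_target_json_sketch 0) -q 64 -o 65536 -w verify -t 1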
00:13:51.037 00:13:51.037 Latency(us) 00:13:51.037 [2024-11-20T22:33:51.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.037 [2024-11-20T22:33:51.771Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:51.037 Verification LBA range: start 0x0 length 0x400 00:13:51.037 Nvme0n1 : 1.01 4127.89 257.99 0.00 0.00 15245.03 1467.11 21686.46 00:13:51.037 [2024-11-20T22:33:51.771Z] =================================================================================================================== 00:13:51.037 [2024-11-20T22:33:51.771Z] Total : 4127.89 257.99 0.00 0.00 15245.03 1467.11 21686.46 00:13:51.295 22:33:51 -- target/host_management.sh@101 -- # stoptarget 00:13:51.295 22:33:51 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:51.295 22:33:51 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:51.295 22:33:51 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:51.295 22:33:51 -- target/host_management.sh@40 -- # nvmftestfini 00:13:51.295 22:33:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:51.295 22:33:51 -- nvmf/common.sh@116 -- # sync 00:13:51.553 22:33:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:51.553 22:33:52 -- nvmf/common.sh@119 -- # set +e 00:13:51.553 22:33:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:51.553 22:33:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:51.553 rmmod nvme_tcp 00:13:51.553 rmmod nvme_fabrics 00:13:51.553 rmmod nvme_keyring 00:13:51.553 22:33:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:51.553 22:33:52 -- nvmf/common.sh@123 -- # set -e 00:13:51.553 22:33:52 -- nvmf/common.sh@124 -- # return 0 00:13:51.553 22:33:52 -- nvmf/common.sh@477 -- # '[' -n 82622 ']' 00:13:51.553 22:33:52 -- nvmf/common.sh@478 -- # killprocess 82622 00:13:51.553 22:33:52 -- common/autotest_common.sh@936 -- # '[' -z 82622 ']' 00:13:51.553 22:33:52 -- common/autotest_common.sh@940 -- # kill -0 82622 00:13:51.553 22:33:52 -- common/autotest_common.sh@941 -- # uname 00:13:51.553 22:33:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:51.553 22:33:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82622 00:13:51.553 22:33:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:51.553 22:33:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:51.553 killing process with pid 82622 00:13:51.553 22:33:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82622' 00:13:51.553 22:33:52 -- common/autotest_common.sh@955 -- # kill 82622 00:13:51.553 22:33:52 -- common/autotest_common.sh@960 -- # wait 82622 00:13:51.812 [2024-11-20 22:33:52.333736] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:51.812 22:33:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:51.812 22:33:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:51.812 22:33:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:51.812 22:33:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.812 22:33:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:51.812 22:33:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.812 22:33:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.812 22:33:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.812 22:33:52 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:51.812 00:13:51.812 real 0m5.651s 00:13:51.812 user 0m23.960s 00:13:51.812 sys 0m1.378s 00:13:51.812 22:33:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:51.812 22:33:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.812 ************************************ 00:13:51.812 END TEST nvmf_host_management 00:13:51.812 ************************************ 00:13:51.812 22:33:52 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:51.812 00:13:51.812 real 0m6.263s 00:13:51.812 user 0m24.171s 00:13:51.812 sys 0m1.638s 00:13:51.812 22:33:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:51.812 22:33:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.812 ************************************ 00:13:51.812 END TEST nvmf_host_management 00:13:51.812 ************************************ 00:13:51.812 22:33:52 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:51.812 22:33:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:51.812 22:33:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.812 22:33:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.812 ************************************ 00:13:51.812 START TEST nvmf_lvol 00:13:51.812 ************************************ 00:13:51.812 22:33:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:52.071 * Looking for test storage... 00:13:52.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.071 22:33:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:52.071 22:33:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:52.071 22:33:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:52.071 22:33:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:52.071 22:33:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:52.071 22:33:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:52.071 22:33:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:52.071 22:33:52 -- scripts/common.sh@335 -- # IFS=.-: 00:13:52.071 22:33:52 -- scripts/common.sh@335 -- # read -ra ver1 00:13:52.071 22:33:52 -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.071 22:33:52 -- scripts/common.sh@336 -- # read -ra ver2 00:13:52.071 22:33:52 -- scripts/common.sh@337 -- # local 'op=<' 00:13:52.071 22:33:52 -- scripts/common.sh@339 -- # ver1_l=2 00:13:52.071 22:33:52 -- scripts/common.sh@340 -- # ver2_l=1 00:13:52.072 22:33:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:52.072 22:33:52 -- scripts/common.sh@343 -- # case "$op" in 00:13:52.072 22:33:52 -- scripts/common.sh@344 -- # : 1 00:13:52.072 22:33:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:52.072 22:33:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.072 22:33:52 -- scripts/common.sh@364 -- # decimal 1 00:13:52.072 22:33:52 -- scripts/common.sh@352 -- # local d=1 00:13:52.072 22:33:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.072 22:33:52 -- scripts/common.sh@354 -- # echo 1 00:13:52.072 22:33:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:52.072 22:33:52 -- scripts/common.sh@365 -- # decimal 2 00:13:52.072 22:33:52 -- scripts/common.sh@352 -- # local d=2 00:13:52.072 22:33:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.072 22:33:52 -- scripts/common.sh@354 -- # echo 2 00:13:52.072 22:33:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:52.072 22:33:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:52.072 22:33:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:52.072 22:33:52 -- scripts/common.sh@367 -- # return 0 00:13:52.072 22:33:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.072 22:33:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:52.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.072 --rc genhtml_branch_coverage=1 00:13:52.072 --rc genhtml_function_coverage=1 00:13:52.072 --rc genhtml_legend=1 00:13:52.072 --rc geninfo_all_blocks=1 00:13:52.072 --rc geninfo_unexecuted_blocks=1 00:13:52.072 00:13:52.072 ' 00:13:52.072 22:33:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:52.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.072 --rc genhtml_branch_coverage=1 00:13:52.072 --rc genhtml_function_coverage=1 00:13:52.072 --rc genhtml_legend=1 00:13:52.072 --rc geninfo_all_blocks=1 00:13:52.072 --rc geninfo_unexecuted_blocks=1 00:13:52.072 00:13:52.072 ' 00:13:52.072 22:33:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:52.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.072 --rc genhtml_branch_coverage=1 00:13:52.072 --rc genhtml_function_coverage=1 00:13:52.072 --rc genhtml_legend=1 00:13:52.072 --rc geninfo_all_blocks=1 00:13:52.072 --rc geninfo_unexecuted_blocks=1 00:13:52.072 00:13:52.072 ' 00:13:52.072 22:33:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:52.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.072 --rc genhtml_branch_coverage=1 00:13:52.072 --rc genhtml_function_coverage=1 00:13:52.072 --rc genhtml_legend=1 00:13:52.072 --rc geninfo_all_blocks=1 00:13:52.072 --rc geninfo_unexecuted_blocks=1 00:13:52.072 00:13:52.072 ' 00:13:52.072 22:33:52 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.072 22:33:52 -- nvmf/common.sh@7 -- # uname -s 00:13:52.072 22:33:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.072 22:33:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.072 22:33:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.072 22:33:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.072 22:33:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.072 22:33:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.072 22:33:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.072 22:33:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.072 22:33:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.072 22:33:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.072 22:33:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:13:52.072 
22:33:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:13:52.072 22:33:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.072 22:33:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.072 22:33:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.072 22:33:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.072 22:33:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.072 22:33:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.072 22:33:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.072 22:33:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.072 22:33:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.072 22:33:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.072 22:33:52 -- paths/export.sh@5 -- # export PATH 00:13:52.072 22:33:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.072 22:33:52 -- nvmf/common.sh@46 -- # : 0 00:13:52.072 22:33:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:52.072 22:33:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:52.072 22:33:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:52.072 22:33:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.072 22:33:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.072 22:33:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:13:52.072 22:33:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:52.072 22:33:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:52.072 22:33:52 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.072 22:33:52 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.072 22:33:52 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:52.072 22:33:52 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:52.072 22:33:52 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:52.072 22:33:52 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:52.072 22:33:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:52.073 22:33:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.073 22:33:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:52.073 22:33:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:52.073 22:33:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:52.073 22:33:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.073 22:33:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.073 22:33:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.073 22:33:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:52.073 22:33:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:52.073 22:33:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:52.073 22:33:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:52.073 22:33:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:52.073 22:33:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:52.073 22:33:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.073 22:33:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.073 22:33:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:52.073 22:33:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:52.073 22:33:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.073 22:33:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.073 22:33:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.073 22:33:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.073 22:33:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.073 22:33:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.073 22:33:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.073 22:33:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.073 22:33:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:52.073 22:33:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:52.073 Cannot find device "nvmf_tgt_br" 00:13:52.073 22:33:52 -- nvmf/common.sh@154 -- # true 00:13:52.073 22:33:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.073 Cannot find device "nvmf_tgt_br2" 00:13:52.073 22:33:52 -- nvmf/common.sh@155 -- # true 00:13:52.073 22:33:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:52.073 22:33:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:52.073 Cannot find device "nvmf_tgt_br" 00:13:52.073 22:33:52 -- nvmf/common.sh@157 -- # true 00:13:52.073 22:33:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:52.073 Cannot find device "nvmf_tgt_br2" 00:13:52.073 22:33:52 -- nvmf/common.sh@158 -- # true 00:13:52.073 22:33:52 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:13:52.331 22:33:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:52.331 22:33:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.331 22:33:52 -- nvmf/common.sh@161 -- # true 00:13:52.331 22:33:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.332 22:33:52 -- nvmf/common.sh@162 -- # true 00:13:52.332 22:33:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:52.332 22:33:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:52.332 22:33:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:52.332 22:33:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:52.332 22:33:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:52.332 22:33:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:52.332 22:33:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:52.332 22:33:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:52.332 22:33:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:52.332 22:33:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:52.332 22:33:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:52.332 22:33:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:52.332 22:33:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:52.332 22:33:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:52.332 22:33:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:52.332 22:33:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:52.332 22:33:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:52.332 22:33:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:52.332 22:33:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:52.332 22:33:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:52.332 22:33:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.332 22:33:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.332 22:33:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.332 22:33:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:52.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:13:52.332 00:13:52.332 --- 10.0.0.2 ping statistics --- 00:13:52.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.332 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:52.332 22:33:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:52.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:52.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:52.332 00:13:52.332 --- 10.0.0.3 ping statistics --- 00:13:52.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.332 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:52.332 22:33:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:52.332 00:13:52.332 --- 10.0.0.1 ping statistics --- 00:13:52.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.332 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:52.332 22:33:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.332 22:33:53 -- nvmf/common.sh@421 -- # return 0 00:13:52.332 22:33:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:52.332 22:33:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.332 22:33:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:52.332 22:33:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:52.332 22:33:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.332 22:33:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:52.332 22:33:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:52.332 22:33:53 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:52.332 22:33:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:52.332 22:33:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.332 22:33:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.332 22:33:53 -- nvmf/common.sh@469 -- # nvmfpid=82981 00:13:52.332 22:33:53 -- nvmf/common.sh@470 -- # waitforlisten 82981 00:13:52.332 22:33:53 -- common/autotest_common.sh@829 -- # '[' -z 82981 ']' 00:13:52.332 22:33:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.332 22:33:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:52.332 22:33:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.332 22:33:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.332 22:33:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.332 22:33:53 -- common/autotest_common.sh@10 -- # set +x 00:13:52.590 [2024-11-20 22:33:53.115399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:52.590 [2024-11-20 22:33:53.115491] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.590 [2024-11-20 22:33:53.257481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.848 [2024-11-20 22:33:53.336536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:52.849 [2024-11-20 22:33:53.336736] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.849 [2024-11-20 22:33:53.336754] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:52.849 [2024-11-20 22:33:53.336766] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.849 [2024-11-20 22:33:53.336956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.849 [2024-11-20 22:33:53.337646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.849 [2024-11-20 22:33:53.337735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.415 22:33:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.415 22:33:54 -- common/autotest_common.sh@862 -- # return 0 00:13:53.415 22:33:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:53.415 22:33:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.415 22:33:54 -- common/autotest_common.sh@10 -- # set +x 00:13:53.415 22:33:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.415 22:33:54 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:53.673 [2024-11-20 22:33:54.399230] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.931 22:33:54 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:54.190 22:33:54 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:54.190 22:33:54 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:54.448 22:33:54 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:54.448 22:33:54 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:54.449 22:33:55 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:54.706 22:33:55 -- target/nvmf_lvol.sh@29 -- # lvs=52c85957-fdd0-4a63-84de-256a4bfd1ffa 00:13:54.706 22:33:55 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 52c85957-fdd0-4a63-84de-256a4bfd1ffa lvol 20 00:13:54.964 22:33:55 -- target/nvmf_lvol.sh@32 -- # lvol=574836c0-8a99-4781-bf83-b294bff8f314 00:13:54.964 22:33:55 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:55.529 22:33:55 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 574836c0-8a99-4781-bf83-b294bff8f314 00:13:55.529 22:33:56 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:55.787 [2024-11-20 22:33:56.423079] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.788 22:33:56 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:56.045 22:33:56 -- target/nvmf_lvol.sh@42 -- # perf_pid=83129 00:13:56.045 22:33:56 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:56.045 22:33:56 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:57.418 22:33:57 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 574836c0-8a99-4781-bf83-b294bff8f314 MY_SNAPSHOT 
00:13:57.418 22:33:58 -- target/nvmf_lvol.sh@47 -- # snapshot=523c40ef-32ba-4f99-8686-c01c1fe26f5e 00:13:57.418 22:33:58 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 574836c0-8a99-4781-bf83-b294bff8f314 30 00:13:57.676 22:33:58 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 523c40ef-32ba-4f99-8686-c01c1fe26f5e MY_CLONE 00:13:57.934 22:33:58 -- target/nvmf_lvol.sh@49 -- # clone=703ba2ed-74d5-4fd4-8213-59d9acfb27bf 00:13:57.934 22:33:58 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 703ba2ed-74d5-4fd4-8213-59d9acfb27bf 00:13:58.866 22:33:59 -- target/nvmf_lvol.sh@53 -- # wait 83129 00:14:07.007 Initializing NVMe Controllers 00:14:07.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:07.007 Controller IO queue size 128, less than required. 00:14:07.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:07.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:07.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:07.007 Initialization complete. Launching workers. 00:14:07.007 ======================================================== 00:14:07.007 Latency(us) 00:14:07.007 Device Information : IOPS MiB/s Average min max 00:14:07.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7705.00 30.10 16627.59 2187.12 87270.67 00:14:07.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7808.60 30.50 16396.46 334.86 90449.09 00:14:07.007 ======================================================== 00:14:07.007 Total : 15513.60 60.60 16511.25 334.86 90449.09 00:14:07.007 00:14:07.007 22:34:07 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:07.007 22:34:07 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 574836c0-8a99-4781-bf83-b294bff8f314 00:14:07.007 22:34:07 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52c85957-fdd0-4a63-84de-256a4bfd1ffa 00:14:07.266 22:34:07 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:07.266 22:34:07 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:07.266 22:34:07 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:07.266 22:34:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:07.266 22:34:07 -- nvmf/common.sh@116 -- # sync 00:14:07.266 22:34:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:07.266 22:34:07 -- nvmf/common.sh@119 -- # set +e 00:14:07.266 22:34:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:07.266 22:34:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:07.266 rmmod nvme_tcp 00:14:07.266 rmmod nvme_fabrics 00:14:07.266 rmmod nvme_keyring 00:14:07.266 22:34:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:07.266 22:34:07 -- nvmf/common.sh@123 -- # set -e 00:14:07.266 22:34:07 -- nvmf/common.sh@124 -- # return 0 00:14:07.266 22:34:07 -- nvmf/common.sh@477 -- # '[' -n 82981 ']' 00:14:07.266 22:34:07 -- nvmf/common.sh@478 -- # killprocess 82981 00:14:07.266 22:34:07 -- common/autotest_common.sh@936 -- # '[' -z 82981 ']' 00:14:07.266 22:34:07 -- common/autotest_common.sh@940 -- # kill -0 82981 00:14:07.266 22:34:07 -- common/autotest_common.sh@941 -- # uname 00:14:07.266 22:34:07 
-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:07.266 22:34:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82981 00:14:07.525 killing process with pid 82981 00:14:07.525 22:34:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:07.525 22:34:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:07.525 22:34:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82981' 00:14:07.525 22:34:08 -- common/autotest_common.sh@955 -- # kill 82981 00:14:07.525 22:34:08 -- common/autotest_common.sh@960 -- # wait 82981 00:14:07.783 22:34:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:07.783 22:34:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:07.783 22:34:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:07.783 22:34:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.783 22:34:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:07.783 22:34:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.783 22:34:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.783 22:34:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.783 22:34:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:07.783 00:14:07.783 real 0m15.842s 00:14:07.783 user 1m6.149s 00:14:07.783 sys 0m3.618s 00:14:07.783 22:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:07.783 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:07.783 ************************************ 00:14:07.783 END TEST nvmf_lvol 00:14:07.783 ************************************ 00:14:07.783 22:34:08 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:07.783 22:34:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:07.783 22:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.783 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:07.783 ************************************ 00:14:07.783 START TEST nvmf_lvs_grow 00:14:07.783 ************************************ 00:14:07.783 22:34:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:07.783 * Looking for test storage... 
00:14:07.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.783 22:34:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:07.783 22:34:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:07.783 22:34:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:08.042 22:34:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:08.042 22:34:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:08.042 22:34:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:08.042 22:34:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:08.043 22:34:08 -- scripts/common.sh@335 -- # IFS=.-: 00:14:08.043 22:34:08 -- scripts/common.sh@335 -- # read -ra ver1 00:14:08.043 22:34:08 -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.043 22:34:08 -- scripts/common.sh@336 -- # read -ra ver2 00:14:08.043 22:34:08 -- scripts/common.sh@337 -- # local 'op=<' 00:14:08.043 22:34:08 -- scripts/common.sh@339 -- # ver1_l=2 00:14:08.043 22:34:08 -- scripts/common.sh@340 -- # ver2_l=1 00:14:08.043 22:34:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:08.043 22:34:08 -- scripts/common.sh@343 -- # case "$op" in 00:14:08.043 22:34:08 -- scripts/common.sh@344 -- # : 1 00:14:08.043 22:34:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:08.043 22:34:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.043 22:34:08 -- scripts/common.sh@364 -- # decimal 1 00:14:08.043 22:34:08 -- scripts/common.sh@352 -- # local d=1 00:14:08.043 22:34:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.043 22:34:08 -- scripts/common.sh@354 -- # echo 1 00:14:08.043 22:34:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:08.043 22:34:08 -- scripts/common.sh@365 -- # decimal 2 00:14:08.043 22:34:08 -- scripts/common.sh@352 -- # local d=2 00:14:08.043 22:34:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.043 22:34:08 -- scripts/common.sh@354 -- # echo 2 00:14:08.043 22:34:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:08.043 22:34:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:08.043 22:34:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:08.043 22:34:08 -- scripts/common.sh@367 -- # return 0 00:14:08.043 22:34:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.043 22:34:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:08.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.043 --rc genhtml_branch_coverage=1 00:14:08.043 --rc genhtml_function_coverage=1 00:14:08.043 --rc genhtml_legend=1 00:14:08.043 --rc geninfo_all_blocks=1 00:14:08.043 --rc geninfo_unexecuted_blocks=1 00:14:08.043 00:14:08.043 ' 00:14:08.043 22:34:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:08.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.043 --rc genhtml_branch_coverage=1 00:14:08.043 --rc genhtml_function_coverage=1 00:14:08.043 --rc genhtml_legend=1 00:14:08.043 --rc geninfo_all_blocks=1 00:14:08.043 --rc geninfo_unexecuted_blocks=1 00:14:08.043 00:14:08.043 ' 00:14:08.043 22:34:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:08.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.043 --rc genhtml_branch_coverage=1 00:14:08.043 --rc genhtml_function_coverage=1 00:14:08.043 --rc genhtml_legend=1 00:14:08.043 --rc geninfo_all_blocks=1 00:14:08.043 --rc geninfo_unexecuted_blocks=1 00:14:08.043 00:14:08.043 ' 00:14:08.043 
22:34:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:08.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.043 --rc genhtml_branch_coverage=1 00:14:08.043 --rc genhtml_function_coverage=1 00:14:08.043 --rc genhtml_legend=1 00:14:08.043 --rc geninfo_all_blocks=1 00:14:08.043 --rc geninfo_unexecuted_blocks=1 00:14:08.043 00:14:08.043 ' 00:14:08.043 22:34:08 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:08.043 22:34:08 -- nvmf/common.sh@7 -- # uname -s 00:14:08.043 22:34:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.043 22:34:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.043 22:34:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.043 22:34:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.043 22:34:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.043 22:34:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.043 22:34:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.043 22:34:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.043 22:34:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.043 22:34:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.043 22:34:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:14:08.043 22:34:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:14:08.043 22:34:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.043 22:34:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.043 22:34:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:08.043 22:34:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.043 22:34:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.043 22:34:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.043 22:34:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.043 22:34:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.043 22:34:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.043 22:34:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.043 22:34:08 -- paths/export.sh@5 -- # export PATH 00:14:08.043 22:34:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.043 22:34:08 -- nvmf/common.sh@46 -- # : 0 00:14:08.043 22:34:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:08.043 22:34:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:08.043 22:34:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:08.043 22:34:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.043 22:34:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.043 22:34:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:08.043 22:34:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:08.043 22:34:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:08.043 22:34:08 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:08.043 22:34:08 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:08.043 22:34:08 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:08.043 22:34:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:08.043 22:34:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.043 22:34:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:08.043 22:34:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:08.043 22:34:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:08.043 22:34:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.043 22:34:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.043 22:34:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.043 22:34:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:08.043 22:34:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:08.043 22:34:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:08.043 22:34:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:08.043 22:34:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:08.043 22:34:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:08.043 22:34:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.043 22:34:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.043 22:34:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:08.043 22:34:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:08.043 22:34:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:08.043 22:34:08 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:08.043 22:34:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:08.043 22:34:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.043 22:34:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:08.043 22:34:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:08.043 22:34:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:08.043 22:34:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:08.043 22:34:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:08.043 22:34:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:08.043 Cannot find device "nvmf_tgt_br" 00:14:08.043 22:34:08 -- nvmf/common.sh@154 -- # true 00:14:08.043 22:34:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.043 Cannot find device "nvmf_tgt_br2" 00:14:08.043 22:34:08 -- nvmf/common.sh@155 -- # true 00:14:08.043 22:34:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:08.043 22:34:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:08.043 Cannot find device "nvmf_tgt_br" 00:14:08.043 22:34:08 -- nvmf/common.sh@157 -- # true 00:14:08.043 22:34:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:08.043 Cannot find device "nvmf_tgt_br2" 00:14:08.043 22:34:08 -- nvmf/common.sh@158 -- # true 00:14:08.043 22:34:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:08.043 22:34:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:08.043 22:34:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.043 22:34:08 -- nvmf/common.sh@161 -- # true 00:14:08.043 22:34:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.043 22:34:08 -- nvmf/common.sh@162 -- # true 00:14:08.044 22:34:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.044 22:34:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.302 22:34:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.302 22:34:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.302 22:34:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.302 22:34:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.302 22:34:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.302 22:34:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:08.302 22:34:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:08.302 22:34:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:08.302 22:34:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:08.302 22:34:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:08.302 22:34:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:08.302 22:34:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.302 22:34:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:08.302 22:34:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.302 22:34:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:08.302 22:34:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:08.302 22:34:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.302 22:34:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.302 22:34:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.302 22:34:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.302 22:34:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.302 22:34:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:08.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:14:08.302 00:14:08.302 --- 10.0.0.2 ping statistics --- 00:14:08.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.302 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:08.302 22:34:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:08.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:08.302 00:14:08.302 --- 10.0.0.3 ping statistics --- 00:14:08.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.302 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:08.302 22:34:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:14:08.302 00:14:08.302 --- 10.0.0.1 ping statistics --- 00:14:08.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.302 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:08.302 22:34:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.302 22:34:08 -- nvmf/common.sh@421 -- # return 0 00:14:08.302 22:34:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:08.302 22:34:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.302 22:34:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:08.302 22:34:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:08.302 22:34:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.302 22:34:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:08.302 22:34:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:08.302 22:34:08 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:08.302 22:34:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:08.302 22:34:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.303 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:08.303 22:34:08 -- nvmf/common.sh@469 -- # nvmfpid=83503 00:14:08.303 22:34:08 -- nvmf/common.sh@470 -- # waitforlisten 83503 00:14:08.303 22:34:08 -- common/autotest_common.sh@829 -- # '[' -z 83503 ']' 00:14:08.303 22:34:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.303 22:34:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.303 22:34:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
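
Note: the nvmf_veth_init sequence traced above reduces to the following network plumbing (a condensed sketch, not the verbatim script; it uses the same namespace, interface, and address names as this run, and omits the stale-device cleanup and per-command timeouts):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target side lives in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator address (host side)
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                # bridge ties host and namespace together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                     # host -> target-namespace reachability
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # and back
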
00:14:08.303 22:34:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.303 22:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:08.303 22:34:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:08.303 [2024-11-20 22:34:09.026857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:08.303 [2024-11-20 22:34:09.026952] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.561 [2024-11-20 22:34:09.163232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.561 [2024-11-20 22:34:09.232142] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:08.561 [2024-11-20 22:34:09.232318] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.561 [2024-11-20 22:34:09.232332] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.561 [2024-11-20 22:34:09.232340] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.561 [2024-11-20 22:34:09.232367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.495 22:34:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.495 22:34:09 -- common/autotest_common.sh@862 -- # return 0 00:14:09.495 22:34:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:09.495 22:34:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.495 22:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:09.495 22:34:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.495 22:34:10 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:09.753 [2024-11-20 22:34:10.325769] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:09.753 22:34:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:09.753 22:34:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.753 22:34:10 -- common/autotest_common.sh@10 -- # set +x 00:14:09.753 ************************************ 00:14:09.753 START TEST lvs_grow_clean 00:14:09.753 ************************************ 00:14:09.753 22:34:10 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:09.753 22:34:10 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:10.012 22:34:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:10.012 22:34:10 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:10.271 22:34:10 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:10.271 22:34:10 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:10.271 22:34:10 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:10.529 22:34:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:10.529 22:34:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:10.529 22:34:11 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee lvol 150 00:14:10.788 22:34:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=16660088-a560-4e08-978a-a007baf74487 00:14:10.788 22:34:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:10.788 22:34:11 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:11.046 [2024-11-20 22:34:11.630915] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:11.046 [2024-11-20 22:34:11.630964] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:11.046 true 00:14:11.046 22:34:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:11.046 22:34:11 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:11.304 22:34:11 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:11.304 22:34:11 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:11.563 22:34:12 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16660088-a560-4e08-978a-a007baf74487 00:14:11.821 22:34:12 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:12.079 [2024-11-20 22:34:12.571444] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.080 22:34:12 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:12.080 22:34:12 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:12.080 22:34:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83665 00:14:12.080 22:34:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:12.080 22:34:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83665 /var/tmp/bdevperf.sock 00:14:12.080 22:34:12 -- common/autotest_common.sh@829 -- # '[' -z 83665 ']' 00:14:12.080 
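
For reference, the lvs_grow provisioning traced above can be reproduced by hand with the same RPCs (a minimal sketch: $rpc points at scripts/rpc.py, the captured UUIDs stand in for the values printed in the trace, and error handling is left out):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio_file"                               # 200 MiB backing file
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096             # expose it as an AIO bdev
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)       # 4 MiB clusters -> 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)           # 150 MiB, thick-provisioned

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # bdevperf then attaches to the exported namespace over TCP via its own RPC socket:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
         -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
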
22:34:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:12.080 22:34:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:12.080 22:34:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:12.080 22:34:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.080 22:34:12 -- common/autotest_common.sh@10 -- # set +x 00:14:12.339 [2024-11-20 22:34:12.822861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:12.339 [2024-11-20 22:34:12.822936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83665 ] 00:14:12.339 [2024-11-20 22:34:12.957160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.339 [2024-11-20 22:34:13.024203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.274 22:34:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.274 22:34:13 -- common/autotest_common.sh@862 -- # return 0 00:14:13.274 22:34:13 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:13.274 Nvme0n1 00:14:13.533 22:34:14 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:13.533 [ 00:14:13.533 { 00:14:13.533 "aliases": [ 00:14:13.533 "16660088-a560-4e08-978a-a007baf74487" 00:14:13.533 ], 00:14:13.533 "assigned_rate_limits": { 00:14:13.533 "r_mbytes_per_sec": 0, 00:14:13.533 "rw_ios_per_sec": 0, 00:14:13.533 "rw_mbytes_per_sec": 0, 00:14:13.533 "w_mbytes_per_sec": 0 00:14:13.533 }, 00:14:13.533 "block_size": 4096, 00:14:13.533 "claimed": false, 00:14:13.533 "driver_specific": { 00:14:13.533 "mp_policy": "active_passive", 00:14:13.533 "nvme": [ 00:14:13.533 { 00:14:13.533 "ctrlr_data": { 00:14:13.533 "ana_reporting": false, 00:14:13.533 "cntlid": 1, 00:14:13.533 "firmware_revision": "24.01.1", 00:14:13.533 "model_number": "SPDK bdev Controller", 00:14:13.533 "multi_ctrlr": true, 00:14:13.533 "oacs": { 00:14:13.533 "firmware": 0, 00:14:13.533 "format": 0, 00:14:13.533 "ns_manage": 0, 00:14:13.533 "security": 0 00:14:13.533 }, 00:14:13.533 "serial_number": "SPDK0", 00:14:13.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.533 "vendor_id": "0x8086" 00:14:13.533 }, 00:14:13.533 "ns_data": { 00:14:13.533 "can_share": true, 00:14:13.533 "id": 1 00:14:13.533 }, 00:14:13.533 "trid": { 00:14:13.533 "adrfam": "IPv4", 00:14:13.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.533 "traddr": "10.0.0.2", 00:14:13.534 "trsvcid": "4420", 00:14:13.534 "trtype": "TCP" 00:14:13.534 }, 00:14:13.534 "vs": { 00:14:13.534 "nvme_version": "1.3" 00:14:13.534 } 00:14:13.534 } 00:14:13.534 ] 00:14:13.534 }, 00:14:13.534 "name": "Nvme0n1", 00:14:13.534 "num_blocks": 38912, 00:14:13.534 "product_name": "NVMe disk", 00:14:13.534 "supported_io_types": { 00:14:13.534 "abort": true, 00:14:13.534 "compare": true, 00:14:13.534 "compare_and_write": true, 00:14:13.534 "flush": true, 00:14:13.534 "nvme_admin": true, 00:14:13.534 "nvme_io": true, 00:14:13.534 "read": true, 
00:14:13.534 "reset": true, 00:14:13.534 "unmap": true, 00:14:13.534 "write": true, 00:14:13.534 "write_zeroes": true 00:14:13.534 }, 00:14:13.534 "uuid": "16660088-a560-4e08-978a-a007baf74487", 00:14:13.534 "zoned": false 00:14:13.534 } 00:14:13.534 ] 00:14:13.534 22:34:14 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83707 00:14:13.534 22:34:14 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:13.534 22:34:14 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:13.792 Running I/O for 10 seconds... 00:14:14.728 Latency(us) 00:14:14.728 [2024-11-20T22:34:15.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.728 [2024-11-20T22:34:15.462Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.728 Nvme0n1 : 1.00 10480.00 40.94 0.00 0.00 0.00 0.00 0.00 00:14:14.728 [2024-11-20T22:34:15.462Z] =================================================================================================================== 00:14:14.728 [2024-11-20T22:34:15.462Z] Total : 10480.00 40.94 0.00 0.00 0.00 0.00 0.00 00:14:14.728 00:14:15.664 22:34:16 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:15.664 [2024-11-20T22:34:16.398Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.664 Nvme0n1 : 2.00 10413.00 40.68 0.00 0.00 0.00 0.00 0.00 00:14:15.664 [2024-11-20T22:34:16.398Z] =================================================================================================================== 00:14:15.664 [2024-11-20T22:34:16.398Z] Total : 10413.00 40.68 0.00 0.00 0.00 0.00 0.00 00:14:15.664 00:14:15.923 true 00:14:15.923 22:34:16 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:15.923 22:34:16 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:16.181 22:34:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:16.181 22:34:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:16.181 22:34:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 83707 00:14:16.748 [2024-11-20T22:34:17.482Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.748 Nvme0n1 : 3.00 10306.00 40.26 0.00 0.00 0.00 0.00 0.00 00:14:16.748 [2024-11-20T22:34:17.482Z] =================================================================================================================== 00:14:16.748 [2024-11-20T22:34:17.482Z] Total : 10306.00 40.26 0.00 0.00 0.00 0.00 0.00 00:14:16.748 00:14:17.681 [2024-11-20T22:34:18.415Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.681 Nvme0n1 : 4.00 10252.25 40.05 0.00 0.00 0.00 0.00 0.00 00:14:17.681 [2024-11-20T22:34:18.415Z] =================================================================================================================== 00:14:17.681 [2024-11-20T22:34:18.415Z] Total : 10252.25 40.05 0.00 0.00 0.00 0.00 0.00 00:14:17.681 00:14:18.628 [2024-11-20T22:34:19.362Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.628 Nvme0n1 : 5.00 10212.80 39.89 0.00 0.00 0.00 0.00 0.00 00:14:18.628 [2024-11-20T22:34:19.362Z] =================================================================================================================== 00:14:18.628 [2024-11-20T22:34:19.362Z] Total : 
10212.80 39.89 0.00 0.00 0.00 0.00 0.00 00:14:18.628 00:14:20.005 [2024-11-20T22:34:20.739Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.005 Nvme0n1 : 6.00 10149.67 39.65 0.00 0.00 0.00 0.00 0.00 00:14:20.005 [2024-11-20T22:34:20.739Z] =================================================================================================================== 00:14:20.005 [2024-11-20T22:34:20.739Z] Total : 10149.67 39.65 0.00 0.00 0.00 0.00 0.00 00:14:20.005 00:14:20.952 [2024-11-20T22:34:21.686Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.952 Nvme0n1 : 7.00 10115.43 39.51 0.00 0.00 0.00 0.00 0.00 00:14:20.952 [2024-11-20T22:34:21.686Z] =================================================================================================================== 00:14:20.952 [2024-11-20T22:34:21.686Z] Total : 10115.43 39.51 0.00 0.00 0.00 0.00 0.00 00:14:20.952 00:14:21.918 [2024-11-20T22:34:22.652Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.918 Nvme0n1 : 8.00 10078.25 39.37 0.00 0.00 0.00 0.00 0.00 00:14:21.918 [2024-11-20T22:34:22.652Z] =================================================================================================================== 00:14:21.918 [2024-11-20T22:34:22.652Z] Total : 10078.25 39.37 0.00 0.00 0.00 0.00 0.00 00:14:21.918 00:14:22.855 [2024-11-20T22:34:23.589Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.855 Nvme0n1 : 9.00 9889.00 38.63 0.00 0.00 0.00 0.00 0.00 00:14:22.855 [2024-11-20T22:34:23.589Z] =================================================================================================================== 00:14:22.855 [2024-11-20T22:34:23.589Z] Total : 9889.00 38.63 0.00 0.00 0.00 0.00 0.00 00:14:22.855 00:14:23.790 [2024-11-20T22:34:24.524Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.790 Nvme0n1 : 10.00 9848.30 38.47 0.00 0.00 0.00 0.00 0.00 00:14:23.790 [2024-11-20T22:34:24.524Z] =================================================================================================================== 00:14:23.790 [2024-11-20T22:34:24.524Z] Total : 9848.30 38.47 0.00 0.00 0.00 0.00 0.00 00:14:23.790 00:14:23.790 00:14:23.790 Latency(us) 00:14:23.790 [2024-11-20T22:34:24.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.790 [2024-11-20T22:34:24.524Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.790 Nvme0n1 : 10.01 9846.74 38.46 0.00 0.00 12990.72 5987.61 173491.67 00:14:23.790 [2024-11-20T22:34:24.524Z] =================================================================================================================== 00:14:23.790 [2024-11-20T22:34:24.524Z] Total : 9846.74 38.46 0.00 0.00 12990.72 5987.61 173491.67 00:14:23.790 0 00:14:23.790 22:34:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83665 00:14:23.790 22:34:24 -- common/autotest_common.sh@936 -- # '[' -z 83665 ']' 00:14:23.790 22:34:24 -- common/autotest_common.sh@940 -- # kill -0 83665 00:14:23.790 22:34:24 -- common/autotest_common.sh@941 -- # uname 00:14:23.790 22:34:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.790 22:34:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83665 00:14:23.790 22:34:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:23.790 22:34:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:23.790 killing process with pid 83665 
00:14:23.790 22:34:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83665' 00:14:23.790 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.790 00:14:23.790 Latency(us) 00:14:23.790 [2024-11-20T22:34:24.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.790 [2024-11-20T22:34:24.524Z] =================================================================================================================== 00:14:23.790 [2024-11-20T22:34:24.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.790 22:34:24 -- common/autotest_common.sh@955 -- # kill 83665 00:14:23.790 22:34:24 -- common/autotest_common.sh@960 -- # wait 83665 00:14:24.048 22:34:24 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:24.307 22:34:24 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:24.307 22:34:24 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:24.565 22:34:25 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:24.565 22:34:25 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:24.565 22:34:25 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:24.823 [2024-11-20 22:34:25.319821] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:24.823 22:34:25 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:24.823 22:34:25 -- common/autotest_common.sh@650 -- # local es=0 00:14:24.823 22:34:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:24.823 22:34:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.823 22:34:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.823 22:34:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.823 22:34:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.823 22:34:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.823 22:34:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.823 22:34:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.823 22:34:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:24.823 22:34:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:25.082 2024/11/20 22:34:25 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c4ae6ec6-59dd-423e-96e1-4518ad52a7ee], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:25.082 request: 00:14:25.082 { 00:14:25.082 "method": "bdev_lvol_get_lvstores", 00:14:25.082 "params": { 00:14:25.082 "uuid": "c4ae6ec6-59dd-423e-96e1-4518ad52a7ee" 00:14:25.082 } 00:14:25.082 } 00:14:25.082 Got JSON-RPC error response 00:14:25.082 GoRPCClient: error on JSON-RPC call 00:14:25.082 22:34:25 -- common/autotest_common.sh@653 -- # es=1 00:14:25.082 22:34:25 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.082 22:34:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.082 22:34:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.082 22:34:25 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:25.341 aio_bdev 00:14:25.341 22:34:25 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 16660088-a560-4e08-978a-a007baf74487 00:14:25.341 22:34:25 -- common/autotest_common.sh@897 -- # local bdev_name=16660088-a560-4e08-978a-a007baf74487 00:14:25.341 22:34:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:25.341 22:34:25 -- common/autotest_common.sh@899 -- # local i 00:14:25.341 22:34:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:25.341 22:34:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:25.341 22:34:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:25.341 22:34:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 16660088-a560-4e08-978a-a007baf74487 -t 2000 00:14:25.600 [ 00:14:25.600 { 00:14:25.600 "aliases": [ 00:14:25.600 "lvs/lvol" 00:14:25.600 ], 00:14:25.600 "assigned_rate_limits": { 00:14:25.600 "r_mbytes_per_sec": 0, 00:14:25.600 "rw_ios_per_sec": 0, 00:14:25.600 "rw_mbytes_per_sec": 0, 00:14:25.600 "w_mbytes_per_sec": 0 00:14:25.600 }, 00:14:25.600 "block_size": 4096, 00:14:25.600 "claimed": false, 00:14:25.600 "driver_specific": { 00:14:25.600 "lvol": { 00:14:25.600 "base_bdev": "aio_bdev", 00:14:25.600 "clone": false, 00:14:25.600 "esnap_clone": false, 00:14:25.600 "lvol_store_uuid": "c4ae6ec6-59dd-423e-96e1-4518ad52a7ee", 00:14:25.600 "snapshot": false, 00:14:25.600 "thin_provision": false 00:14:25.600 } 00:14:25.600 }, 00:14:25.600 "name": "16660088-a560-4e08-978a-a007baf74487", 00:14:25.600 "num_blocks": 38912, 00:14:25.600 "product_name": "Logical Volume", 00:14:25.600 "supported_io_types": { 00:14:25.600 "abort": false, 00:14:25.600 "compare": false, 00:14:25.600 "compare_and_write": false, 00:14:25.600 "flush": false, 00:14:25.600 "nvme_admin": false, 00:14:25.600 "nvme_io": false, 00:14:25.600 "read": true, 00:14:25.600 "reset": true, 00:14:25.600 "unmap": true, 00:14:25.600 "write": true, 00:14:25.600 "write_zeroes": true 00:14:25.600 }, 00:14:25.600 "uuid": "16660088-a560-4e08-978a-a007baf74487", 00:14:25.600 "zoned": false 00:14:25.600 } 00:14:25.600 ] 00:14:25.600 22:34:26 -- common/autotest_common.sh@905 -- # return 0 00:14:25.600 22:34:26 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:25.600 22:34:26 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:25.859 22:34:26 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:25.859 22:34:26 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:25.859 22:34:26 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:26.118 22:34:26 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:26.118 22:34:26 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 16660088-a560-4e08-978a-a007baf74487 00:14:26.377 22:34:26 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c4ae6ec6-59dd-423e-96e1-4518ad52a7ee 00:14:26.635 22:34:27 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:26.902 22:34:27 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:27.161 ************************************ 00:14:27.161 END TEST lvs_grow_clean 00:14:27.161 ************************************ 00:14:27.161 00:14:27.161 real 0m17.461s 00:14:27.161 user 0m16.629s 00:14:27.161 sys 0m2.237s 00:14:27.161 22:34:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:27.161 22:34:27 -- common/autotest_common.sh@10 -- # set +x 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:27.161 22:34:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:27.161 22:34:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:27.161 22:34:27 -- common/autotest_common.sh@10 -- # set +x 00:14:27.161 ************************************ 00:14:27.161 START TEST lvs_grow_dirty 00:14:27.161 ************************************ 00:14:27.161 22:34:27 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:27.161 22:34:27 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:27.420 22:34:28 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:27.420 22:34:28 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:27.678 22:34:28 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a3144f74-dd20-44c5-9985-71654497fe6e 00:14:27.678 22:34:28 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:27.678 22:34:28 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:28.246 22:34:28 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:28.246 22:34:28 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:28.246 22:34:28 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a3144f74-dd20-44c5-9985-71654497fe6e lvol 150 00:14:28.246 22:34:28 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e8e7b107-4806-4b6f-b499-5abb60b37c43 00:14:28.246 22:34:28 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:28.246 22:34:28 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:28.504 [2024-11-20 22:34:29.067833] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:28.504 [2024-11-20 22:34:29.068292] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:28.504 true 00:14:28.504 22:34:29 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:28.504 22:34:29 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:28.761 22:34:29 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:28.761 22:34:29 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:29.019 22:34:29 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8e7b107-4806-4b6f-b499-5abb60b37c43 00:14:29.278 22:34:29 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:29.537 22:34:30 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:29.537 22:34:30 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:29.537 22:34:30 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84092 00:14:29.537 22:34:30 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.537 22:34:30 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84092 /var/tmp/bdevperf.sock 00:14:29.537 22:34:30 -- common/autotest_common.sh@829 -- # '[' -z 84092 ']' 00:14:29.537 22:34:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.537 22:34:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.537 22:34:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.537 22:34:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.537 22:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:29.796 [2024-11-20 22:34:30.271847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:29.796 [2024-11-20 22:34:30.271932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84092 ] 00:14:29.796 [2024-11-20 22:34:30.403646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.796 [2024-11-20 22:34:30.467936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.732 22:34:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.732 22:34:31 -- common/autotest_common.sh@862 -- # return 0 00:14:30.732 22:34:31 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:30.990 Nvme0n1 00:14:30.991 22:34:31 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:31.250 [ 00:14:31.250 { 00:14:31.250 "aliases": [ 00:14:31.250 "e8e7b107-4806-4b6f-b499-5abb60b37c43" 00:14:31.250 ], 00:14:31.250 "assigned_rate_limits": { 00:14:31.250 "r_mbytes_per_sec": 0, 00:14:31.250 "rw_ios_per_sec": 0, 00:14:31.250 "rw_mbytes_per_sec": 0, 00:14:31.250 "w_mbytes_per_sec": 0 00:14:31.250 }, 00:14:31.250 "block_size": 4096, 00:14:31.250 "claimed": false, 00:14:31.250 "driver_specific": { 00:14:31.250 "mp_policy": "active_passive", 00:14:31.250 "nvme": [ 00:14:31.250 { 00:14:31.250 "ctrlr_data": { 00:14:31.250 "ana_reporting": false, 00:14:31.250 "cntlid": 1, 00:14:31.250 "firmware_revision": "24.01.1", 00:14:31.250 "model_number": "SPDK bdev Controller", 00:14:31.250 "multi_ctrlr": true, 00:14:31.250 "oacs": { 00:14:31.250 "firmware": 0, 00:14:31.250 "format": 0, 00:14:31.250 "ns_manage": 0, 00:14:31.250 "security": 0 00:14:31.250 }, 00:14:31.250 "serial_number": "SPDK0", 00:14:31.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.250 "vendor_id": "0x8086" 00:14:31.250 }, 00:14:31.250 "ns_data": { 00:14:31.250 "can_share": true, 00:14:31.250 "id": 1 00:14:31.250 }, 00:14:31.250 "trid": { 00:14:31.250 "adrfam": "IPv4", 00:14:31.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.250 "traddr": "10.0.0.2", 00:14:31.250 "trsvcid": "4420", 00:14:31.250 "trtype": "TCP" 00:14:31.250 }, 00:14:31.250 "vs": { 00:14:31.250 "nvme_version": "1.3" 00:14:31.250 } 00:14:31.250 } 00:14:31.250 ] 00:14:31.250 }, 00:14:31.250 "name": "Nvme0n1", 00:14:31.250 "num_blocks": 38912, 00:14:31.250 "product_name": "NVMe disk", 00:14:31.250 "supported_io_types": { 00:14:31.250 "abort": true, 00:14:31.250 "compare": true, 00:14:31.250 "compare_and_write": true, 00:14:31.250 "flush": true, 00:14:31.250 "nvme_admin": true, 00:14:31.250 "nvme_io": true, 00:14:31.250 "read": true, 00:14:31.250 "reset": true, 00:14:31.250 "unmap": true, 00:14:31.250 "write": true, 00:14:31.250 "write_zeroes": true 00:14:31.250 }, 00:14:31.250 "uuid": "e8e7b107-4806-4b6f-b499-5abb60b37c43", 00:14:31.250 "zoned": false 00:14:31.250 } 00:14:31.250 ] 00:14:31.250 22:34:31 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84135 00:14:31.250 22:34:31 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:31.250 22:34:31 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:31.250 Running I/O for 10 seconds... 
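
The backing AIO file was already doubled and rescanned in the trace above; during the 10-second randwrite run that follows, the lvstore itself is grown and the cluster count re-checked while I/O keeps flowing. Condensed, with $lvs standing in for the lvstore UUID printed above, the grow step amounts to:

    truncate -s 400M "$aio_file"              # done before the run: 51200 -> 102400 blocks
    $rpc bdev_aio_rescan aio_bdev
    # issued while bdevperf keeps writing to Nvme0n1:
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))                      # was 49 before the grow
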
00:14:32.627 Latency(us) 00:14:32.627 [2024-11-20T22:34:33.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.627 [2024-11-20T22:34:33.361Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.627 Nvme0n1 : 1.00 10339.00 40.39 0.00 0.00 0.00 0.00 0.00 00:14:32.627 [2024-11-20T22:34:33.361Z] =================================================================================================================== 00:14:32.627 [2024-11-20T22:34:33.361Z] Total : 10339.00 40.39 0.00 0.00 0.00 0.00 0.00 00:14:32.627 00:14:33.195 22:34:33 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:33.453 [2024-11-20T22:34:34.187Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.453 Nvme0n1 : 2.00 10236.50 39.99 0.00 0.00 0.00 0.00 0.00 00:14:33.453 [2024-11-20T22:34:34.187Z] =================================================================================================================== 00:14:33.453 [2024-11-20T22:34:34.187Z] Total : 10236.50 39.99 0.00 0.00 0.00 0.00 0.00 00:14:33.453 00:14:33.453 true 00:14:33.453 22:34:34 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:33.453 22:34:34 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:33.712 22:34:34 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:33.712 22:34:34 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:33.712 22:34:34 -- target/nvmf_lvs_grow.sh@65 -- # wait 84135 00:14:34.280 [2024-11-20T22:34:35.014Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.280 Nvme0n1 : 3.00 10254.67 40.06 0.00 0.00 0.00 0.00 0.00 00:14:34.280 [2024-11-20T22:34:35.014Z] =================================================================================================================== 00:14:34.280 [2024-11-20T22:34:35.014Z] Total : 10254.67 40.06 0.00 0.00 0.00 0.00 0.00 00:14:34.280 00:14:35.216 [2024-11-20T22:34:35.950Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.216 Nvme0n1 : 4.00 10263.50 40.09 0.00 0.00 0.00 0.00 0.00 00:14:35.216 [2024-11-20T22:34:35.950Z] =================================================================================================================== 00:14:35.216 [2024-11-20T22:34:35.950Z] Total : 10263.50 40.09 0.00 0.00 0.00 0.00 0.00 00:14:35.216 00:14:36.593 [2024-11-20T22:34:37.327Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.593 Nvme0n1 : 5.00 9966.40 38.93 0.00 0.00 0.00 0.00 0.00 00:14:36.593 [2024-11-20T22:34:37.327Z] =================================================================================================================== 00:14:36.593 [2024-11-20T22:34:37.327Z] Total : 9966.40 38.93 0.00 0.00 0.00 0.00 0.00 00:14:36.593 00:14:37.529 [2024-11-20T22:34:38.263Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.529 Nvme0n1 : 6.00 9984.50 39.00 0.00 0.00 0.00 0.00 0.00 00:14:37.529 [2024-11-20T22:34:38.263Z] =================================================================================================================== 00:14:37.529 [2024-11-20T22:34:38.263Z] Total : 9984.50 39.00 0.00 0.00 0.00 0.00 0.00 00:14:37.529 00:14:38.465 [2024-11-20T22:34:39.199Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:14:38.465 Nvme0n1 : 7.00 9954.00 38.88 0.00 0.00 0.00 0.00 0.00 00:14:38.465 [2024-11-20T22:34:39.199Z] =================================================================================================================== 00:14:38.465 [2024-11-20T22:34:39.199Z] Total : 9954.00 38.88 0.00 0.00 0.00 0.00 0.00 00:14:38.465 00:14:39.398 [2024-11-20T22:34:40.132Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.398 Nvme0n1 : 8.00 9889.38 38.63 0.00 0.00 0.00 0.00 0.00 00:14:39.398 [2024-11-20T22:34:40.132Z] =================================================================================================================== 00:14:39.398 [2024-11-20T22:34:40.133Z] Total : 9889.38 38.63 0.00 0.00 0.00 0.00 0.00 00:14:39.399 00:14:40.332 [2024-11-20T22:34:41.066Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.332 Nvme0n1 : 9.00 9755.00 38.11 0.00 0.00 0.00 0.00 0.00 00:14:40.332 [2024-11-20T22:34:41.066Z] =================================================================================================================== 00:14:40.332 [2024-11-20T22:34:41.066Z] Total : 9755.00 38.11 0.00 0.00 0.00 0.00 0.00 00:14:40.332 00:14:41.268 [2024-11-20T22:34:42.002Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.268 Nvme0n1 : 10.00 9660.10 37.73 0.00 0.00 0.00 0.00 0.00 00:14:41.268 [2024-11-20T22:34:42.002Z] =================================================================================================================== 00:14:41.268 [2024-11-20T22:34:42.002Z] Total : 9660.10 37.73 0.00 0.00 0.00 0.00 0.00 00:14:41.268 00:14:41.268 00:14:41.268 Latency(us) 00:14:41.268 [2024-11-20T22:34:42.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.268 [2024-11-20T22:34:42.002Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.268 Nvme0n1 : 10.00 9664.21 37.75 0.00 0.00 13239.82 4200.26 160146.15 00:14:41.268 [2024-11-20T22:34:42.002Z] =================================================================================================================== 00:14:41.268 [2024-11-20T22:34:42.002Z] Total : 9664.21 37.75 0.00 0.00 13239.82 4200.26 160146.15 00:14:41.268 0 00:14:41.268 22:34:41 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84092 00:14:41.268 22:34:41 -- common/autotest_common.sh@936 -- # '[' -z 84092 ']' 00:14:41.268 22:34:41 -- common/autotest_common.sh@940 -- # kill -0 84092 00:14:41.268 22:34:41 -- common/autotest_common.sh@941 -- # uname 00:14:41.268 22:34:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.268 22:34:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84092 00:14:41.527 22:34:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:41.527 killing process with pid 84092 00:14:41.527 22:34:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:41.527 22:34:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84092' 00:14:41.527 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.527 00:14:41.527 Latency(us) 00:14:41.527 [2024-11-20T22:34:42.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.527 [2024-11-20T22:34:42.261Z] =================================================================================================================== 00:14:41.527 [2024-11-20T22:34:42.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:41.527 22:34:42 -- 
common/autotest_common.sh@955 -- # kill 84092 00:14:41.527 22:34:42 -- common/autotest_common.sh@960 -- # wait 84092 00:14:41.527 22:34:42 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:41.786 22:34:42 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:41.786 22:34:42 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:42.045 22:34:42 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:42.045 22:34:42 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:42.045 22:34:42 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83503 00:14:42.045 22:34:42 -- target/nvmf_lvs_grow.sh@74 -- # wait 83503 00:14:42.045 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83503 Killed "${NVMF_APP[@]}" "$@" 00:14:42.045 22:34:42 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:42.045 22:34:42 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:42.045 22:34:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:42.045 22:34:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.045 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:14:42.045 22:34:42 -- nvmf/common.sh@469 -- # nvmfpid=84294 00:14:42.045 22:34:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:42.045 22:34:42 -- nvmf/common.sh@470 -- # waitforlisten 84294 00:14:42.045 22:34:42 -- common/autotest_common.sh@829 -- # '[' -z 84294 ']' 00:14:42.045 22:34:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.045 22:34:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.045 22:34:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.045 22:34:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.045 22:34:42 -- common/autotest_common.sh@10 -- # set +x 00:14:42.304 [2024-11-20 22:34:42.808845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:42.304 [2024-11-20 22:34:42.808947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.304 [2024-11-20 22:34:42.943587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.304 [2024-11-20 22:34:43.013052] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:42.304 [2024-11-20 22:34:43.013212] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.304 [2024-11-20 22:34:43.013224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.304 [2024-11-20 22:34:43.013233] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:42.304 [2024-11-20 22:34:43.013264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.240 22:34:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.240 22:34:43 -- common/autotest_common.sh@862 -- # return 0 00:14:43.240 22:34:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:43.240 22:34:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.240 22:34:43 -- common/autotest_common.sh@10 -- # set +x 00:14:43.240 22:34:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.240 22:34:43 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.499 [2024-11-20 22:34:44.098351] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:43.499 [2024-11-20 22:34:44.098735] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:43.499 [2024-11-20 22:34:44.098943] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:43.499 22:34:44 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:43.499 22:34:44 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev e8e7b107-4806-4b6f-b499-5abb60b37c43 00:14:43.499 22:34:44 -- common/autotest_common.sh@897 -- # local bdev_name=e8e7b107-4806-4b6f-b499-5abb60b37c43 00:14:43.499 22:34:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:43.499 22:34:44 -- common/autotest_common.sh@899 -- # local i 00:14:43.499 22:34:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:43.499 22:34:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:43.499 22:34:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.757 22:34:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8e7b107-4806-4b6f-b499-5abb60b37c43 -t 2000 00:14:44.016 [ 00:14:44.016 { 00:14:44.016 "aliases": [ 00:14:44.016 "lvs/lvol" 00:14:44.016 ], 00:14:44.016 "assigned_rate_limits": { 00:14:44.016 "r_mbytes_per_sec": 0, 00:14:44.016 "rw_ios_per_sec": 0, 00:14:44.016 "rw_mbytes_per_sec": 0, 00:14:44.016 "w_mbytes_per_sec": 0 00:14:44.016 }, 00:14:44.016 "block_size": 4096, 00:14:44.016 "claimed": false, 00:14:44.016 "driver_specific": { 00:14:44.016 "lvol": { 00:14:44.016 "base_bdev": "aio_bdev", 00:14:44.016 "clone": false, 00:14:44.016 "esnap_clone": false, 00:14:44.016 "lvol_store_uuid": "a3144f74-dd20-44c5-9985-71654497fe6e", 00:14:44.016 "snapshot": false, 00:14:44.016 "thin_provision": false 00:14:44.016 } 00:14:44.016 }, 00:14:44.016 "name": "e8e7b107-4806-4b6f-b499-5abb60b37c43", 00:14:44.016 "num_blocks": 38912, 00:14:44.016 "product_name": "Logical Volume", 00:14:44.016 "supported_io_types": { 00:14:44.016 "abort": false, 00:14:44.016 "compare": false, 00:14:44.016 "compare_and_write": false, 00:14:44.016 "flush": false, 00:14:44.016 "nvme_admin": false, 00:14:44.016 "nvme_io": false, 00:14:44.016 "read": true, 00:14:44.016 "reset": true, 00:14:44.016 "unmap": true, 00:14:44.016 "write": true, 00:14:44.016 "write_zeroes": true 00:14:44.016 }, 00:14:44.016 "uuid": "e8e7b107-4806-4b6f-b499-5abb60b37c43", 00:14:44.016 "zoned": false 00:14:44.016 } 00:14:44.016 ] 00:14:44.016 22:34:44 -- common/autotest_common.sh@905 -- # return 0 00:14:44.016 22:34:44 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a3144f74-dd20-44c5-9985-71654497fe6e 00:14:44.016 22:34:44 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:44.275 22:34:44 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:44.275 22:34:44 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:44.275 22:34:44 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:44.534 22:34:45 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:44.534 22:34:45 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.793 [2024-11-20 22:34:45.303640] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:44.793 22:34:45 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:44.793 22:34:45 -- common/autotest_common.sh@650 -- # local es=0 00:14:44.793 22:34:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:44.793 22:34:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.793 22:34:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.793 22:34:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.793 22:34:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.793 22:34:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.793 22:34:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.793 22:34:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.793 22:34:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:44.793 22:34:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:45.052 2024/11/20 22:34:45 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:a3144f74-dd20-44c5-9985-71654497fe6e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:45.052 request: 00:14:45.052 { 00:14:45.052 "method": "bdev_lvol_get_lvstores", 00:14:45.052 "params": { 00:14:45.052 "uuid": "a3144f74-dd20-44c5-9985-71654497fe6e" 00:14:45.052 } 00:14:45.052 } 00:14:45.052 Got JSON-RPC error response 00:14:45.052 GoRPCClient: error on JSON-RPC call 00:14:45.052 22:34:45 -- common/autotest_common.sh@653 -- # es=1 00:14:45.052 22:34:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.052 22:34:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.052 22:34:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.052 22:34:45 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.311 aio_bdev 00:14:45.311 22:34:45 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e8e7b107-4806-4b6f-b499-5abb60b37c43 00:14:45.311 22:34:45 -- common/autotest_common.sh@897 -- # local bdev_name=e8e7b107-4806-4b6f-b499-5abb60b37c43 00:14:45.311 22:34:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.311 
22:34:45 -- common/autotest_common.sh@899 -- # local i 00:14:45.311 22:34:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.311 22:34:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.311 22:34:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:45.311 22:34:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8e7b107-4806-4b6f-b499-5abb60b37c43 -t 2000 00:14:45.569 [ 00:14:45.569 { 00:14:45.569 "aliases": [ 00:14:45.569 "lvs/lvol" 00:14:45.569 ], 00:14:45.570 "assigned_rate_limits": { 00:14:45.570 "r_mbytes_per_sec": 0, 00:14:45.570 "rw_ios_per_sec": 0, 00:14:45.570 "rw_mbytes_per_sec": 0, 00:14:45.570 "w_mbytes_per_sec": 0 00:14:45.570 }, 00:14:45.570 "block_size": 4096, 00:14:45.570 "claimed": false, 00:14:45.570 "driver_specific": { 00:14:45.570 "lvol": { 00:14:45.570 "base_bdev": "aio_bdev", 00:14:45.570 "clone": false, 00:14:45.570 "esnap_clone": false, 00:14:45.570 "lvol_store_uuid": "a3144f74-dd20-44c5-9985-71654497fe6e", 00:14:45.570 "snapshot": false, 00:14:45.570 "thin_provision": false 00:14:45.570 } 00:14:45.570 }, 00:14:45.570 "name": "e8e7b107-4806-4b6f-b499-5abb60b37c43", 00:14:45.570 "num_blocks": 38912, 00:14:45.570 "product_name": "Logical Volume", 00:14:45.570 "supported_io_types": { 00:14:45.570 "abort": false, 00:14:45.570 "compare": false, 00:14:45.570 "compare_and_write": false, 00:14:45.570 "flush": false, 00:14:45.570 "nvme_admin": false, 00:14:45.570 "nvme_io": false, 00:14:45.570 "read": true, 00:14:45.570 "reset": true, 00:14:45.570 "unmap": true, 00:14:45.570 "write": true, 00:14:45.570 "write_zeroes": true 00:14:45.570 }, 00:14:45.570 "uuid": "e8e7b107-4806-4b6f-b499-5abb60b37c43", 00:14:45.570 "zoned": false 00:14:45.570 } 00:14:45.570 ] 00:14:45.828 22:34:46 -- common/autotest_common.sh@905 -- # return 0 00:14:45.828 22:34:46 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:45.828 22:34:46 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:46.087 22:34:46 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:46.087 22:34:46 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:46.087 22:34:46 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:46.345 22:34:46 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:46.345 22:34:46 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e8e7b107-4806-4b6f-b499-5abb60b37c43 00:14:46.345 22:34:47 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3144f74-dd20-44c5-9985-71654497fe6e 00:14:46.604 22:34:47 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.862 22:34:47 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:47.121 00:14:47.121 real 0m19.956s 00:14:47.121 user 0m40.719s 00:14:47.121 sys 0m7.961s 00:14:47.121 22:34:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:47.121 22:34:47 -- common/autotest_common.sh@10 -- # set +x 00:14:47.121 ************************************ 00:14:47.121 END TEST lvs_grow_dirty 00:14:47.121 ************************************ 00:14:47.380 22:34:47 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:47.380 22:34:47 -- common/autotest_common.sh@806 -- # type=--id 00:14:47.380 22:34:47 -- common/autotest_common.sh@807 -- # id=0 00:14:47.380 22:34:47 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:47.380 22:34:47 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:47.380 22:34:47 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:47.380 22:34:47 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:47.380 22:34:47 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:47.380 22:34:47 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:47.380 nvmf_trace.0 00:14:47.380 22:34:47 -- common/autotest_common.sh@821 -- # return 0 00:14:47.380 22:34:47 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:47.380 22:34:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:47.380 22:34:47 -- nvmf/common.sh@116 -- # sync 00:14:47.947 22:34:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:47.947 22:34:48 -- nvmf/common.sh@119 -- # set +e 00:14:47.947 22:34:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:47.947 22:34:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:47.947 rmmod nvme_tcp 00:14:47.947 rmmod nvme_fabrics 00:14:47.947 rmmod nvme_keyring 00:14:47.947 22:34:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:47.947 22:34:48 -- nvmf/common.sh@123 -- # set -e 00:14:47.947 22:34:48 -- nvmf/common.sh@124 -- # return 0 00:14:47.947 22:34:48 -- nvmf/common.sh@477 -- # '[' -n 84294 ']' 00:14:47.947 22:34:48 -- nvmf/common.sh@478 -- # killprocess 84294 00:14:47.947 22:34:48 -- common/autotest_common.sh@936 -- # '[' -z 84294 ']' 00:14:47.947 22:34:48 -- common/autotest_common.sh@940 -- # kill -0 84294 00:14:47.947 22:34:48 -- common/autotest_common.sh@941 -- # uname 00:14:47.947 22:34:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.947 22:34:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84294 00:14:47.947 22:34:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:47.947 22:34:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:47.947 killing process with pid 84294 00:14:47.947 22:34:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84294' 00:14:47.947 22:34:48 -- common/autotest_common.sh@955 -- # kill 84294 00:14:47.947 22:34:48 -- common/autotest_common.sh@960 -- # wait 84294 00:14:48.515 22:34:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:48.515 22:34:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:48.515 22:34:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:48.515 22:34:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.515 22:34:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:48.515 22:34:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.515 22:34:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.515 22:34:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.515 22:34:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:48.515 00:14:48.515 real 0m40.581s 00:14:48.515 user 1m4.172s 00:14:48.515 sys 0m11.508s 00:14:48.515 22:34:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:48.515 ************************************ 00:14:48.515 END TEST nvmf_lvs_grow 00:14:48.515 22:34:48 -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.515 ************************************ 00:14:48.515 22:34:49 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:48.515 22:34:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:48.515 22:34:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.515 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:14:48.515 ************************************ 00:14:48.515 START TEST nvmf_bdev_io_wait 00:14:48.515 ************************************ 00:14:48.515 22:34:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:48.515 * Looking for test storage... 00:14:48.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:48.515 22:34:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:48.515 22:34:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:48.515 22:34:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:48.515 22:34:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:48.515 22:34:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:48.515 22:34:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:48.515 22:34:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:48.515 22:34:49 -- scripts/common.sh@335 -- # IFS=.-: 00:14:48.515 22:34:49 -- scripts/common.sh@335 -- # read -ra ver1 00:14:48.515 22:34:49 -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.515 22:34:49 -- scripts/common.sh@336 -- # read -ra ver2 00:14:48.515 22:34:49 -- scripts/common.sh@337 -- # local 'op=<' 00:14:48.515 22:34:49 -- scripts/common.sh@339 -- # ver1_l=2 00:14:48.515 22:34:49 -- scripts/common.sh@340 -- # ver2_l=1 00:14:48.515 22:34:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:48.515 22:34:49 -- scripts/common.sh@343 -- # case "$op" in 00:14:48.515 22:34:49 -- scripts/common.sh@344 -- # : 1 00:14:48.515 22:34:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:48.515 22:34:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.515 22:34:49 -- scripts/common.sh@364 -- # decimal 1 00:14:48.515 22:34:49 -- scripts/common.sh@352 -- # local d=1 00:14:48.515 22:34:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.515 22:34:49 -- scripts/common.sh@354 -- # echo 1 00:14:48.515 22:34:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:48.515 22:34:49 -- scripts/common.sh@365 -- # decimal 2 00:14:48.515 22:34:49 -- scripts/common.sh@352 -- # local d=2 00:14:48.515 22:34:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.515 22:34:49 -- scripts/common.sh@354 -- # echo 2 00:14:48.515 22:34:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:48.515 22:34:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:48.515 22:34:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:48.515 22:34:49 -- scripts/common.sh@367 -- # return 0 00:14:48.515 22:34:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.515 22:34:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.515 --rc genhtml_branch_coverage=1 00:14:48.515 --rc genhtml_function_coverage=1 00:14:48.515 --rc genhtml_legend=1 00:14:48.515 --rc geninfo_all_blocks=1 00:14:48.515 --rc geninfo_unexecuted_blocks=1 00:14:48.515 00:14:48.515 ' 00:14:48.515 22:34:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.515 --rc genhtml_branch_coverage=1 00:14:48.516 --rc genhtml_function_coverage=1 00:14:48.516 --rc genhtml_legend=1 00:14:48.516 --rc geninfo_all_blocks=1 00:14:48.516 --rc geninfo_unexecuted_blocks=1 00:14:48.516 00:14:48.516 ' 00:14:48.516 22:34:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:48.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.516 --rc genhtml_branch_coverage=1 00:14:48.516 --rc genhtml_function_coverage=1 00:14:48.516 --rc genhtml_legend=1 00:14:48.516 --rc geninfo_all_blocks=1 00:14:48.516 --rc geninfo_unexecuted_blocks=1 00:14:48.516 00:14:48.516 ' 00:14:48.516 22:34:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:48.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.516 --rc genhtml_branch_coverage=1 00:14:48.516 --rc genhtml_function_coverage=1 00:14:48.516 --rc genhtml_legend=1 00:14:48.516 --rc geninfo_all_blocks=1 00:14:48.516 --rc geninfo_unexecuted_blocks=1 00:14:48.516 00:14:48.516 ' 00:14:48.516 22:34:49 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.516 22:34:49 -- nvmf/common.sh@7 -- # uname -s 00:14:48.516 22:34:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.516 22:34:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.516 22:34:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.516 22:34:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.516 22:34:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.516 22:34:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.516 22:34:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.516 22:34:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.516 22:34:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.516 22:34:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.516 22:34:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
00:14:48.516 22:34:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:14:48.516 22:34:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.516 22:34:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.516 22:34:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:48.516 22:34:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.516 22:34:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.516 22:34:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.516 22:34:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.516 22:34:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.516 22:34:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.516 22:34:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.516 22:34:49 -- paths/export.sh@5 -- # export PATH 00:14:48.516 22:34:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.516 22:34:49 -- nvmf/common.sh@46 -- # : 0 00:14:48.516 22:34:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:48.516 22:34:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:48.516 22:34:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:48.516 22:34:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.516 22:34:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.516 22:34:49 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:48.516 22:34:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:48.516 22:34:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:48.781 22:34:49 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.781 22:34:49 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.781 22:34:49 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:48.781 22:34:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:48.781 22:34:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.781 22:34:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:48.781 22:34:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:48.781 22:34:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:48.781 22:34:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.781 22:34:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.781 22:34:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.781 22:34:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:48.781 22:34:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:48.781 22:34:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:48.781 22:34:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:48.781 22:34:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:48.781 22:34:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:48.781 22:34:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.781 22:34:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.781 22:34:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:48.781 22:34:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:48.781 22:34:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:48.781 22:34:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:48.781 22:34:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:48.781 22:34:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.781 22:34:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:48.781 22:34:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:48.781 22:34:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:48.781 22:34:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:48.781 22:34:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:48.781 22:34:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:48.781 Cannot find device "nvmf_tgt_br" 00:14:48.781 22:34:49 -- nvmf/common.sh@154 -- # true 00:14:48.781 22:34:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.781 Cannot find device "nvmf_tgt_br2" 00:14:48.781 22:34:49 -- nvmf/common.sh@155 -- # true 00:14:48.781 22:34:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:48.781 22:34:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:48.781 Cannot find device "nvmf_tgt_br" 00:14:48.781 22:34:49 -- nvmf/common.sh@157 -- # true 00:14:48.781 22:34:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:48.781 Cannot find device "nvmf_tgt_br2" 00:14:48.781 22:34:49 -- nvmf/common.sh@158 -- # true 00:14:48.781 22:34:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:48.781 22:34:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:48.781 22:34:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.781 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.781 22:34:49 -- nvmf/common.sh@161 -- # true 00:14:48.781 22:34:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.781 22:34:49 -- nvmf/common.sh@162 -- # true 00:14:48.781 22:34:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.781 22:34:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.781 22:34:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.781 22:34:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.781 22:34:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.781 22:34:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.781 22:34:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.781 22:34:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:48.781 22:34:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:48.781 22:34:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:48.782 22:34:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:48.782 22:34:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:48.782 22:34:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:48.782 22:34:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:48.782 22:34:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:48.782 22:34:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:48.782 22:34:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:48.782 22:34:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:48.782 22:34:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.067 22:34:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.067 22:34:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.067 22:34:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.067 22:34:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.067 22:34:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:49.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:14:49.067 00:14:49.067 --- 10.0.0.2 ping statistics --- 00:14:49.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.067 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:49.067 22:34:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:49.067 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.067 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:49.067 00:14:49.067 --- 10.0.0.3 ping statistics --- 00:14:49.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.067 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:49.067 22:34:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:14:49.067 00:14:49.067 --- 10.0.0.1 ping statistics --- 00:14:49.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.067 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:49.067 22:34:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.067 22:34:49 -- nvmf/common.sh@421 -- # return 0 00:14:49.067 22:34:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:49.067 22:34:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.067 22:34:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:49.067 22:34:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:49.067 22:34:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.067 22:34:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:49.067 22:34:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:49.067 22:34:49 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:49.067 22:34:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:49.067 22:34:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:49.067 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:14:49.067 22:34:49 -- nvmf/common.sh@469 -- # nvmfpid=84734 00:14:49.067 22:34:49 -- nvmf/common.sh@470 -- # waitforlisten 84734 00:14:49.067 22:34:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:49.067 22:34:49 -- common/autotest_common.sh@829 -- # '[' -z 84734 ']' 00:14:49.067 22:34:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.067 22:34:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.067 22:34:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.067 22:34:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.067 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:14:49.067 [2024-11-20 22:34:49.636436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:49.067 [2024-11-20 22:34:49.636502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.067 [2024-11-20 22:34:49.767372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.356 [2024-11-20 22:34:49.836447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:49.356 [2024-11-20 22:34:49.836603] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.356 [2024-11-20 22:34:49.836616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.356 [2024-11-20 22:34:49.836623] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
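For reference, the veth/namespace wiring that the trace above records (before nvmf_tgt is started) reduces to the following sequence. This is a condensed sketch assembled from the commands visible in the log, not the verbatim body of nvmf/common.sh; names and addresses are the ones the test itself uses.

# create the target network namespace and three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side endpoints into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up, inside and outside the namespace
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP traffic on the initiator interface and across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity checks, as seen in the log: ping 10.0.0.2 and 10.0.0.3 from the host,
# and 10.0.0.1 from inside nvmf_tgt_ns_spdk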
00:14:49.356 [2024-11-20 22:34:49.836787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.356 [2024-11-20 22:34:49.837442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.356 [2024-11-20 22:34:49.837587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.356 [2024-11-20 22:34:49.837594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.356 22:34:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.356 22:34:49 -- common/autotest_common.sh@862 -- # return 0 00:14:49.356 22:34:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:49.356 22:34:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:49.356 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:14:49.356 22:34:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.356 22:34:49 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:49.356 22:34:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.356 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:14:49.356 22:34:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.356 22:34:49 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:49.356 22:34:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.356 22:34:49 -- common/autotest_common.sh@10 -- # set +x 00:14:49.356 22:34:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.356 22:34:50 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.356 22:34:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.356 22:34:50 -- common/autotest_common.sh@10 -- # set +x 00:14:49.356 [2024-11-20 22:34:50.033271] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.356 22:34:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.356 22:34:50 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.356 22:34:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.356 22:34:50 -- common/autotest_common.sh@10 -- # set +x 00:14:49.356 Malloc0 00:14:49.356 22:34:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.356 22:34:50 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:49.356 22:34:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.356 22:34:50 -- common/autotest_common.sh@10 -- # set +x 00:14:49.625 22:34:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.625 22:34:50 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.625 22:34:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.625 22:34:50 -- common/autotest_common.sh@10 -- # set +x 00:14:49.625 22:34:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.625 22:34:50 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.625 22:34:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.625 22:34:50 -- common/autotest_common.sh@10 -- # set +x 00:14:49.625 [2024-11-20 22:34:50.090483] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.625 22:34:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84768 00:14:49.626 22:34:50 
-- target/bdev_io_wait.sh@30 -- # READ_PID=84770 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:49.626 22:34:50 -- nvmf/common.sh@520 -- # config=() 00:14:49.626 22:34:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84772 00:14:49.626 22:34:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:49.626 22:34:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:49.626 { 00:14:49.626 "params": { 00:14:49.626 "name": "Nvme$subsystem", 00:14:49.626 "trtype": "$TEST_TRANSPORT", 00:14:49.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.626 "adrfam": "ipv4", 00:14:49.626 "trsvcid": "$NVMF_PORT", 00:14:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.626 "hdgst": ${hdgst:-false}, 00:14:49.626 "ddgst": ${ddgst:-false} 00:14:49.626 }, 00:14:49.626 "method": "bdev_nvme_attach_controller" 00:14:49.626 } 00:14:49.626 EOF 00:14:49.626 )") 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:49.626 22:34:50 -- nvmf/common.sh@520 -- # config=() 00:14:49.626 22:34:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:49.626 22:34:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:49.626 22:34:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:49.626 { 00:14:49.626 "params": { 00:14:49.626 "name": "Nvme$subsystem", 00:14:49.626 "trtype": "$TEST_TRANSPORT", 00:14:49.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.626 "adrfam": "ipv4", 00:14:49.626 "trsvcid": "$NVMF_PORT", 00:14:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.626 "hdgst": ${hdgst:-false}, 00:14:49.626 "ddgst": ${ddgst:-false} 00:14:49.626 }, 00:14:49.626 "method": "bdev_nvme_attach_controller" 00:14:49.626 } 00:14:49.626 EOF 00:14:49.626 )") 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:49.626 22:34:50 -- nvmf/common.sh@520 -- # config=() 00:14:49.626 22:34:50 -- nvmf/common.sh@542 -- # cat 00:14:49.626 22:34:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:49.626 22:34:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:49.626 22:34:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:49.626 { 00:14:49.626 "params": { 00:14:49.626 "name": "Nvme$subsystem", 00:14:49.626 "trtype": "$TEST_TRANSPORT", 00:14:49.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.626 "adrfam": "ipv4", 00:14:49.626 "trsvcid": "$NVMF_PORT", 00:14:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.626 "hdgst": ${hdgst:-false}, 00:14:49.626 "ddgst": ${ddgst:-false} 00:14:49.626 }, 00:14:49.626 "method": "bdev_nvme_attach_controller" 00:14:49.626 } 00:14:49.626 EOF 00:14:49.626 )") 00:14:49.626 22:34:50 -- nvmf/common.sh@542 -- # cat 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:49.626 22:34:50 -- nvmf/common.sh@542 -- # cat 00:14:49.626 22:34:50 -- nvmf/common.sh@544 -- # jq . 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84773 00:14:49.626 22:34:50 -- nvmf/common.sh@544 -- # jq . 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@35 -- # sync 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:49.626 22:34:50 -- nvmf/common.sh@520 -- # config=() 00:14:49.626 22:34:50 -- nvmf/common.sh@520 -- # local subsystem config 00:14:49.626 22:34:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:49.626 22:34:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:49.626 { 00:14:49.626 "params": { 00:14:49.626 "name": "Nvme$subsystem", 00:14:49.626 "trtype": "$TEST_TRANSPORT", 00:14:49.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.626 "adrfam": "ipv4", 00:14:49.626 "trsvcid": "$NVMF_PORT", 00:14:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.626 "hdgst": ${hdgst:-false}, 00:14:49.626 "ddgst": ${ddgst:-false} 00:14:49.626 }, 00:14:49.626 "method": "bdev_nvme_attach_controller" 00:14:49.626 } 00:14:49.626 EOF 00:14:49.626 )") 00:14:49.626 22:34:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:49.626 22:34:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:49.626 "params": { 00:14:49.626 "name": "Nvme1", 00:14:49.626 "trtype": "tcp", 00:14:49.626 "traddr": "10.0.0.2", 00:14:49.626 "adrfam": "ipv4", 00:14:49.626 "trsvcid": "4420", 00:14:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.626 "hdgst": false, 00:14:49.626 "ddgst": false 00:14:49.626 }, 00:14:49.626 "method": "bdev_nvme_attach_controller" 00:14:49.626 }' 00:14:49.626 22:34:50 -- nvmf/common.sh@544 -- # jq . 00:14:49.626 22:34:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:49.626 22:34:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:49.626 "params": { 00:14:49.626 "name": "Nvme1", 00:14:49.626 "trtype": "tcp", 00:14:49.626 "traddr": "10.0.0.2", 00:14:49.626 "adrfam": "ipv4", 00:14:49.626 "trsvcid": "4420", 00:14:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.626 "hdgst": false, 00:14:49.626 "ddgst": false 00:14:49.626 }, 00:14:49.626 "method": "bdev_nvme_attach_controller" 00:14:49.626 }' 00:14:49.626 22:34:50 -- nvmf/common.sh@542 -- # cat 00:14:49.626 22:34:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:49.626 22:34:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:49.626 "params": { 00:14:49.626 "name": "Nvme1", 00:14:49.626 "trtype": "tcp", 00:14:49.626 "traddr": "10.0.0.2", 00:14:49.626 "adrfam": "ipv4", 00:14:49.626 "trsvcid": "4420", 00:14:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.626 "hdgst": false, 00:14:49.626 "ddgst": false 00:14:49.626 }, 00:14:49.626 "method": "bdev_nvme_attach_controller" 00:14:49.626 }' 00:14:49.626 22:34:50 -- nvmf/common.sh@544 -- # jq . 
00:14:49.626 22:34:50 -- nvmf/common.sh@545 -- # IFS=, 00:14:49.626 22:34:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:49.626 "params": { 00:14:49.626 "name": "Nvme1", 00:14:49.626 "trtype": "tcp", 00:14:49.626 "traddr": "10.0.0.2", 00:14:49.626 "adrfam": "ipv4", 00:14:49.626 "trsvcid": "4420", 00:14:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.626 "hdgst": false, 00:14:49.626 "ddgst": false 00:14:49.626 }, 00:14:49.626 "method": "bdev_nvme_attach_controller" 00:14:49.626 }' 00:14:49.626 [2024-11-20 22:34:50.168878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:49.626 [2024-11-20 22:34:50.168961] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:49.626 22:34:50 -- target/bdev_io_wait.sh@37 -- # wait 84768 00:14:49.626 [2024-11-20 22:34:50.170879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:49.626 [2024-11-20 22:34:50.171088] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:49.626 [2024-11-20 22:34:50.172968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:49.626 [2024-11-20 22:34:50.173041] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:49.626 [2024-11-20 22:34:50.174555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:49.626 [2024-11-20 22:34:50.174632] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:49.885 [2024-11-20 22:34:50.376697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.885 [2024-11-20 22:34:50.446114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.885 [2024-11-20 22:34:50.451434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:49.885 [2024-11-20 22:34:50.517024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:49.885 [2024-11-20 22:34:50.530994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.885 [2024-11-20 22:34:50.602339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.885 [2024-11-20 22:34:50.606981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:50.143 Running I/O for 1 seconds... 00:14:50.143 Running I/O for 1 seconds... 00:14:50.144 [2024-11-20 22:34:50.677928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:50.144 Running I/O for 1 seconds... 00:14:50.144 Running I/O for 1 seconds... 
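Before the per-workload results that follow, the setup recorded above can be summarized as: the target is provisioned with a handful of rpc_cmd calls, and four bdevperf instances (write, read, flush, unmap) are launched on separate core masks, each reading a JSON config that carries the bdev_nvme_attach_controller parameters printed above. A condensed sketch of that flow, taken from the commands in the trace; the /dev/fd/63 path appears in the log presumably because the script supplies the generated config via process substitution rather than a file on disk.

# target-side provisioning (commands as recorded in the trace)
rpc_cmd bdev_set_options -p 5 -c 1
rpc_cmd framework_start_init
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# one bdevperf per workload; only -m/-i and -w differ between the four launches
# (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 \
    -q 128 -o 4096 -w write -t 1 -s 256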
00:14:51.079 00:14:51.079 Latency(us) 00:14:51.079 [2024-11-20T22:34:51.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.079 [2024-11-20T22:34:51.813Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:51.079 Nvme1n1 : 1.00 212630.72 830.59 0.00 0.00 599.70 247.62 960.70 00:14:51.080 [2024-11-20T22:34:51.814Z] =================================================================================================================== 00:14:51.080 [2024-11-20T22:34:51.814Z] Total : 212630.72 830.59 0.00 0.00 599.70 247.62 960.70 00:14:51.080 00:14:51.080 Latency(us) 00:14:51.080 [2024-11-20T22:34:51.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.080 [2024-11-20T22:34:51.814Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:51.080 Nvme1n1 : 1.03 5338.68 20.85 0.00 0.00 23513.88 8340.95 38368.35 00:14:51.080 [2024-11-20T22:34:51.814Z] =================================================================================================================== 00:14:51.080 [2024-11-20T22:34:51.814Z] Total : 5338.68 20.85 0.00 0.00 23513.88 8340.95 38368.35 00:14:51.080 00:14:51.080 Latency(us) 00:14:51.080 [2024-11-20T22:34:51.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.080 [2024-11-20T22:34:51.814Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:51.080 Nvme1n1 : 1.01 4888.31 19.09 0.00 0.00 26042.35 10783.65 43611.23 00:14:51.080 [2024-11-20T22:34:51.814Z] =================================================================================================================== 00:14:51.080 [2024-11-20T22:34:51.814Z] Total : 4888.31 19.09 0.00 0.00 26042.35 10783.65 43611.23 00:14:51.339 00:14:51.339 Latency(us) 00:14:51.339 [2024-11-20T22:34:52.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.339 [2024-11-20T22:34:52.073Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:51.339 Nvme1n1 : 1.01 7133.88 27.87 0.00 0.00 17871.73 7030.23 32410.53 00:14:51.339 [2024-11-20T22:34:52.073Z] =================================================================================================================== 00:14:51.339 [2024-11-20T22:34:52.073Z] Total : 7133.88 27.87 0.00 0.00 17871.73 7030.23 32410.53 00:14:51.597 22:34:52 -- target/bdev_io_wait.sh@38 -- # wait 84770 00:14:51.597 22:34:52 -- target/bdev_io_wait.sh@39 -- # wait 84772 00:14:51.597 22:34:52 -- target/bdev_io_wait.sh@40 -- # wait 84773 00:14:51.597 22:34:52 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.597 22:34:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.597 22:34:52 -- common/autotest_common.sh@10 -- # set +x 00:14:51.597 22:34:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.597 22:34:52 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:51.597 22:34:52 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:51.597 22:34:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.597 22:34:52 -- nvmf/common.sh@116 -- # sync 00:14:51.597 22:34:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:51.597 22:34:52 -- nvmf/common.sh@119 -- # set +e 00:14:51.597 22:34:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.597 22:34:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:51.597 rmmod nvme_tcp 00:14:51.597 rmmod nvme_fabrics 00:14:51.597 rmmod nvme_keyring 00:14:51.597 22:34:52 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:51.597 22:34:52 -- nvmf/common.sh@123 -- # set -e 00:14:51.597 22:34:52 -- nvmf/common.sh@124 -- # return 0 00:14:51.597 22:34:52 -- nvmf/common.sh@477 -- # '[' -n 84734 ']' 00:14:51.597 22:34:52 -- nvmf/common.sh@478 -- # killprocess 84734 00:14:51.597 22:34:52 -- common/autotest_common.sh@936 -- # '[' -z 84734 ']' 00:14:51.597 22:34:52 -- common/autotest_common.sh@940 -- # kill -0 84734 00:14:51.597 22:34:52 -- common/autotest_common.sh@941 -- # uname 00:14:51.597 22:34:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.597 22:34:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84734 00:14:51.597 22:34:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.597 22:34:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.597 killing process with pid 84734 00:14:51.597 22:34:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84734' 00:14:51.597 22:34:52 -- common/autotest_common.sh@955 -- # kill 84734 00:14:51.597 22:34:52 -- common/autotest_common.sh@960 -- # wait 84734 00:14:51.856 22:34:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.856 22:34:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:51.856 22:34:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:51.856 22:34:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.856 22:34:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:51.857 22:34:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.857 22:34:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.857 22:34:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.857 22:34:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:51.857 00:14:51.857 real 0m3.505s 00:14:51.857 user 0m15.608s 00:14:51.857 sys 0m1.799s 00:14:51.857 22:34:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.857 22:34:52 -- common/autotest_common.sh@10 -- # set +x 00:14:51.857 ************************************ 00:14:51.857 END TEST nvmf_bdev_io_wait 00:14:51.857 ************************************ 00:14:51.857 22:34:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.857 22:34:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:51.857 22:34:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.857 22:34:52 -- common/autotest_common.sh@10 -- # set +x 00:14:52.116 ************************************ 00:14:52.116 START TEST nvmf_queue_depth 00:14:52.116 ************************************ 00:14:52.116 22:34:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:52.116 * Looking for test storage... 
00:14:52.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.116 22:34:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:52.116 22:34:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:52.116 22:34:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:52.116 22:34:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:52.116 22:34:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:52.116 22:34:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:52.116 22:34:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:52.116 22:34:52 -- scripts/common.sh@335 -- # IFS=.-: 00:14:52.116 22:34:52 -- scripts/common.sh@335 -- # read -ra ver1 00:14:52.116 22:34:52 -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.116 22:34:52 -- scripts/common.sh@336 -- # read -ra ver2 00:14:52.116 22:34:52 -- scripts/common.sh@337 -- # local 'op=<' 00:14:52.116 22:34:52 -- scripts/common.sh@339 -- # ver1_l=2 00:14:52.116 22:34:52 -- scripts/common.sh@340 -- # ver2_l=1 00:14:52.116 22:34:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:52.116 22:34:52 -- scripts/common.sh@343 -- # case "$op" in 00:14:52.116 22:34:52 -- scripts/common.sh@344 -- # : 1 00:14:52.116 22:34:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:52.116 22:34:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.116 22:34:52 -- scripts/common.sh@364 -- # decimal 1 00:14:52.116 22:34:52 -- scripts/common.sh@352 -- # local d=1 00:14:52.116 22:34:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.116 22:34:52 -- scripts/common.sh@354 -- # echo 1 00:14:52.116 22:34:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:52.116 22:34:52 -- scripts/common.sh@365 -- # decimal 2 00:14:52.116 22:34:52 -- scripts/common.sh@352 -- # local d=2 00:14:52.116 22:34:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.116 22:34:52 -- scripts/common.sh@354 -- # echo 2 00:14:52.116 22:34:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:52.116 22:34:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:52.116 22:34:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:52.116 22:34:52 -- scripts/common.sh@367 -- # return 0 00:14:52.116 22:34:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.116 22:34:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:52.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.116 --rc genhtml_branch_coverage=1 00:14:52.116 --rc genhtml_function_coverage=1 00:14:52.116 --rc genhtml_legend=1 00:14:52.116 --rc geninfo_all_blocks=1 00:14:52.116 --rc geninfo_unexecuted_blocks=1 00:14:52.116 00:14:52.116 ' 00:14:52.116 22:34:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:52.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.116 --rc genhtml_branch_coverage=1 00:14:52.116 --rc genhtml_function_coverage=1 00:14:52.116 --rc genhtml_legend=1 00:14:52.116 --rc geninfo_all_blocks=1 00:14:52.116 --rc geninfo_unexecuted_blocks=1 00:14:52.116 00:14:52.116 ' 00:14:52.116 22:34:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:52.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.116 --rc genhtml_branch_coverage=1 00:14:52.116 --rc genhtml_function_coverage=1 00:14:52.116 --rc genhtml_legend=1 00:14:52.116 --rc geninfo_all_blocks=1 00:14:52.116 --rc geninfo_unexecuted_blocks=1 00:14:52.116 00:14:52.116 ' 00:14:52.116 
22:34:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:52.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.116 --rc genhtml_branch_coverage=1 00:14:52.116 --rc genhtml_function_coverage=1 00:14:52.117 --rc genhtml_legend=1 00:14:52.117 --rc geninfo_all_blocks=1 00:14:52.117 --rc geninfo_unexecuted_blocks=1 00:14:52.117 00:14:52.117 ' 00:14:52.117 22:34:52 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.117 22:34:52 -- nvmf/common.sh@7 -- # uname -s 00:14:52.117 22:34:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.117 22:34:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.117 22:34:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.117 22:34:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.117 22:34:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.117 22:34:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.117 22:34:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.117 22:34:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.117 22:34:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.117 22:34:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.117 22:34:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:14:52.117 22:34:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:14:52.117 22:34:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.117 22:34:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.117 22:34:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.117 22:34:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.117 22:34:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.117 22:34:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.117 22:34:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.117 22:34:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.117 22:34:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.117 22:34:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.117 22:34:52 -- paths/export.sh@5 -- # export PATH 00:14:52.117 22:34:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.117 22:34:52 -- nvmf/common.sh@46 -- # : 0 00:14:52.117 22:34:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:52.117 22:34:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:52.117 22:34:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:52.117 22:34:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.117 22:34:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.117 22:34:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:52.117 22:34:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:52.117 22:34:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:52.117 22:34:52 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:52.117 22:34:52 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:52.117 22:34:52 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.117 22:34:52 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:52.117 22:34:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:52.117 22:34:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.117 22:34:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:52.117 22:34:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:52.117 22:34:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:52.117 22:34:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.117 22:34:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.117 22:34:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.117 22:34:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:52.117 22:34:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:52.117 22:34:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:52.117 22:34:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:52.117 22:34:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:52.117 22:34:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:52.117 22:34:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.117 22:34:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.117 22:34:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.117 22:34:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:52.117 22:34:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.117 22:34:52 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.117 22:34:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.117 22:34:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.117 22:34:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.117 22:34:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.117 22:34:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.117 22:34:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.117 22:34:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:52.117 22:34:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:52.117 Cannot find device "nvmf_tgt_br" 00:14:52.117 22:34:52 -- nvmf/common.sh@154 -- # true 00:14:52.117 22:34:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.117 Cannot find device "nvmf_tgt_br2" 00:14:52.117 22:34:52 -- nvmf/common.sh@155 -- # true 00:14:52.117 22:34:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:52.117 22:34:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:52.376 Cannot find device "nvmf_tgt_br" 00:14:52.376 22:34:52 -- nvmf/common.sh@157 -- # true 00:14:52.376 22:34:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:52.376 Cannot find device "nvmf_tgt_br2" 00:14:52.376 22:34:52 -- nvmf/common.sh@158 -- # true 00:14:52.376 22:34:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:52.376 22:34:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:52.376 22:34:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.376 22:34:52 -- nvmf/common.sh@161 -- # true 00:14:52.376 22:34:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.376 22:34:52 -- nvmf/common.sh@162 -- # true 00:14:52.376 22:34:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.376 22:34:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.376 22:34:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.376 22:34:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.376 22:34:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.376 22:34:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.376 22:34:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.376 22:34:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:52.376 22:34:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:52.376 22:34:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:52.376 22:34:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:52.376 22:34:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:52.376 22:34:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:52.376 22:34:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.376 22:34:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:14:52.376 22:34:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.376 22:34:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:52.376 22:34:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:52.377 22:34:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.377 22:34:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.377 22:34:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.377 22:34:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.377 22:34:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.377 22:34:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:52.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:14:52.377 00:14:52.377 --- 10.0.0.2 ping statistics --- 00:14:52.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.377 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:52.377 22:34:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:52.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:52.377 00:14:52.377 --- 10.0.0.3 ping statistics --- 00:14:52.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.377 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:52.377 22:34:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:52.377 00:14:52.377 --- 10.0.0.1 ping statistics --- 00:14:52.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.377 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:52.377 22:34:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.377 22:34:53 -- nvmf/common.sh@421 -- # return 0 00:14:52.377 22:34:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:52.377 22:34:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.377 22:34:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:52.377 22:34:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:52.377 22:34:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.377 22:34:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:52.377 22:34:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:52.636 22:34:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:52.636 22:34:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:52.636 22:34:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:52.636 22:34:53 -- common/autotest_common.sh@10 -- # set +x 00:14:52.636 22:34:53 -- nvmf/common.sh@469 -- # nvmfpid=84994 00:14:52.636 22:34:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:52.636 22:34:53 -- nvmf/common.sh@470 -- # waitforlisten 84994 00:14:52.636 22:34:53 -- common/autotest_common.sh@829 -- # '[' -z 84994 ']' 00:14:52.636 22:34:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.636 22:34:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.636 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:52.636 22:34:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.636 22:34:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.636 22:34:53 -- common/autotest_common.sh@10 -- # set +x 00:14:52.636 [2024-11-20 22:34:53.166044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:52.636 [2024-11-20 22:34:53.166106] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.636 [2024-11-20 22:34:53.298142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.894 [2024-11-20 22:34:53.370857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:52.894 [2024-11-20 22:34:53.370981] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.894 [2024-11-20 22:34:53.370993] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.894 [2024-11-20 22:34:53.371001] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.894 [2024-11-20 22:34:53.371025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.831 22:34:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.831 22:34:54 -- common/autotest_common.sh@862 -- # return 0 00:14:53.831 22:34:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:53.831 22:34:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.831 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.831 22:34:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.831 22:34:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.831 22:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.831 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.831 [2024-11-20 22:34:54.242778] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.831 22:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.831 22:34:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.831 22:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.831 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.831 Malloc0 00:14:53.831 22:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.831 22:34:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.831 22:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.831 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.831 22:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.831 22:34:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.831 22:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.831 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.831 22:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.831 22:34:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
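For readers following the rpc_cmd trace above: the target side of the queue-depth test is assembled from a handful of RPCs against the freshly started nvmf_tgt. A rough equivalent issued directly with scripts/rpc.py is sketched below; the transport options, bdev size, NQN, and listener address are copied from this log, while the in-namespace invocation (ip netns exec nvmf_tgt_ns_spdk) that the test actually uses is left out for brevity.

  #!/usr/bin/env bash
  # Sketch of the target-side setup performed above via rpc_cmd.
  # Assumes nvmf_tgt is already running and serving RPCs on the default /var/tmp/spdk.sock.
  set -euo pipefail
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  "$RPC" nvmf_create_transport -t tcp -o -u 8192    # transport options copied verbatim from the log
  "$RPC" bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev with 512-byte blocks
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420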
00:14:53.831 22:34:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.831 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.831 [2024-11-20 22:34:54.312722] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.831 22:34:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.831 22:34:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=85044 00:14:53.831 22:34:54 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:53.831 22:34:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.831 22:34:54 -- target/queue_depth.sh@33 -- # waitforlisten 85044 /var/tmp/bdevperf.sock 00:14:53.831 22:34:54 -- common/autotest_common.sh@829 -- # '[' -z 85044 ']' 00:14:53.831 22:34:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.831 22:34:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.831 22:34:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.831 22:34:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.831 22:34:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.831 [2024-11-20 22:34:54.364184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:53.831 [2024-11-20 22:34:54.364516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85044 ] 00:14:53.831 [2024-11-20 22:34:54.505849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.090 [2024-11-20 22:34:54.580819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.657 22:34:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.657 22:34:55 -- common/autotest_common.sh@862 -- # return 0 00:14:54.657 22:34:55 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:54.657 22:34:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.657 22:34:55 -- common/autotest_common.sh@10 -- # set +x 00:14:54.915 NVMe0n1 00:14:54.915 22:34:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.915 22:34:55 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:54.915 Running I/O for 10 seconds... 
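On the initiator side, the deep queue is exercised by bdevperf: it is started idle (-z) on a private RPC socket, the remote subsystem is attached as a bdev over TCP, and the I/O phase is triggered through bdevperf.py. A condensed sketch with the same arguments as this run (queue depth 1024, 4096-byte verify I/O for 10 seconds) follows; the short sleep is a crude stand-in for the waitforlisten helper the real script uses.

  #!/usr/bin/env bash
  # Sketch of the bdevperf sequence traced above.
  set -euo pipefail
  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf in wait-for-RPC mode so bdevs can be attached before I/O begins.
  "$SPDK_ROOT/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  sleep 2   # stand-in for waiting until $SOCK is up

  # Attach the target subsystem created earlier; it shows up as bdev NVMe0n1.
  "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Kick off the configured workload, then stop the idle bdevperf process.
  "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  kill "$bdevperf_pid"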
00:15:07.119 00:15:07.119 Latency(us) 00:15:07.119 [2024-11-20T22:35:07.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.119 [2024-11-20T22:35:07.853Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:07.119 Verification LBA range: start 0x0 length 0x4000 00:15:07.119 NVMe0n1 : 10.05 17003.99 66.42 0.00 0.00 60026.63 12988.04 70063.94 00:15:07.119 [2024-11-20T22:35:07.853Z] =================================================================================================================== 00:15:07.119 [2024-11-20T22:35:07.853Z] Total : 17003.99 66.42 0.00 0.00 60026.63 12988.04 70063.94 00:15:07.119 0 00:15:07.119 22:35:05 -- target/queue_depth.sh@39 -- # killprocess 85044 00:15:07.119 22:35:05 -- common/autotest_common.sh@936 -- # '[' -z 85044 ']' 00:15:07.119 22:35:05 -- common/autotest_common.sh@940 -- # kill -0 85044 00:15:07.119 22:35:05 -- common/autotest_common.sh@941 -- # uname 00:15:07.119 22:35:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.119 22:35:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85044 00:15:07.119 killing process with pid 85044 00:15:07.119 Received shutdown signal, test time was about 10.000000 seconds 00:15:07.119 00:15:07.119 Latency(us) 00:15:07.119 [2024-11-20T22:35:07.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.119 [2024-11-20T22:35:07.853Z] =================================================================================================================== 00:15:07.119 [2024-11-20T22:35:07.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.119 22:35:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.119 22:35:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.119 22:35:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85044' 00:15:07.119 22:35:05 -- common/autotest_common.sh@955 -- # kill 85044 00:15:07.119 22:35:05 -- common/autotest_common.sh@960 -- # wait 85044 00:15:07.119 22:35:05 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:07.119 22:35:05 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:07.119 22:35:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:07.119 22:35:05 -- nvmf/common.sh@116 -- # sync 00:15:07.119 22:35:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:07.119 22:35:05 -- nvmf/common.sh@119 -- # set +e 00:15:07.119 22:35:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:07.119 22:35:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:07.119 rmmod nvme_tcp 00:15:07.119 rmmod nvme_fabrics 00:15:07.120 rmmod nvme_keyring 00:15:07.120 22:35:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:07.120 22:35:06 -- nvmf/common.sh@123 -- # set -e 00:15:07.120 22:35:06 -- nvmf/common.sh@124 -- # return 0 00:15:07.120 22:35:06 -- nvmf/common.sh@477 -- # '[' -n 84994 ']' 00:15:07.120 22:35:06 -- nvmf/common.sh@478 -- # killprocess 84994 00:15:07.120 22:35:06 -- common/autotest_common.sh@936 -- # '[' -z 84994 ']' 00:15:07.120 22:35:06 -- common/autotest_common.sh@940 -- # kill -0 84994 00:15:07.120 22:35:06 -- common/autotest_common.sh@941 -- # uname 00:15:07.120 22:35:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.120 22:35:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84994 00:15:07.120 killing process with pid 84994 00:15:07.120 22:35:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:07.120 22:35:06 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:07.120 22:35:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84994' 00:15:07.120 22:35:06 -- common/autotest_common.sh@955 -- # kill 84994 00:15:07.120 22:35:06 -- common/autotest_common.sh@960 -- # wait 84994 00:15:07.120 22:35:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:07.120 22:35:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:07.120 22:35:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:07.120 22:35:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.120 22:35:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:07.120 22:35:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.120 22:35:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.120 22:35:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.120 22:35:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:07.120 00:15:07.120 real 0m13.732s 00:15:07.120 user 0m22.967s 00:15:07.120 sys 0m2.572s 00:15:07.120 22:35:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:07.120 22:35:06 -- common/autotest_common.sh@10 -- # set +x 00:15:07.120 ************************************ 00:15:07.120 END TEST nvmf_queue_depth 00:15:07.120 ************************************ 00:15:07.120 22:35:06 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:07.120 22:35:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:07.120 22:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.120 22:35:06 -- common/autotest_common.sh@10 -- # set +x 00:15:07.120 ************************************ 00:15:07.120 START TEST nvmf_multipath 00:15:07.120 ************************************ 00:15:07.120 22:35:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:07.120 * Looking for test storage... 00:15:07.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:07.120 22:35:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:07.120 22:35:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:07.120 22:35:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:07.120 22:35:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:07.120 22:35:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:07.120 22:35:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:07.120 22:35:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:07.120 22:35:06 -- scripts/common.sh@335 -- # IFS=.-: 00:15:07.120 22:35:06 -- scripts/common.sh@335 -- # read -ra ver1 00:15:07.120 22:35:06 -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.120 22:35:06 -- scripts/common.sh@336 -- # read -ra ver2 00:15:07.120 22:35:06 -- scripts/common.sh@337 -- # local 'op=<' 00:15:07.120 22:35:06 -- scripts/common.sh@339 -- # ver1_l=2 00:15:07.120 22:35:06 -- scripts/common.sh@340 -- # ver2_l=1 00:15:07.120 22:35:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:07.120 22:35:06 -- scripts/common.sh@343 -- # case "$op" in 00:15:07.120 22:35:06 -- scripts/common.sh@344 -- # : 1 00:15:07.120 22:35:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:07.120 22:35:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.120 22:35:06 -- scripts/common.sh@364 -- # decimal 1 00:15:07.120 22:35:06 -- scripts/common.sh@352 -- # local d=1 00:15:07.120 22:35:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.120 22:35:06 -- scripts/common.sh@354 -- # echo 1 00:15:07.120 22:35:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:07.120 22:35:06 -- scripts/common.sh@365 -- # decimal 2 00:15:07.120 22:35:06 -- scripts/common.sh@352 -- # local d=2 00:15:07.120 22:35:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.120 22:35:06 -- scripts/common.sh@354 -- # echo 2 00:15:07.120 22:35:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:07.120 22:35:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:07.120 22:35:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:07.120 22:35:06 -- scripts/common.sh@367 -- # return 0 00:15:07.120 22:35:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.120 22:35:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:07.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.120 --rc genhtml_branch_coverage=1 00:15:07.120 --rc genhtml_function_coverage=1 00:15:07.120 --rc genhtml_legend=1 00:15:07.120 --rc geninfo_all_blocks=1 00:15:07.120 --rc geninfo_unexecuted_blocks=1 00:15:07.120 00:15:07.120 ' 00:15:07.120 22:35:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:07.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.120 --rc genhtml_branch_coverage=1 00:15:07.120 --rc genhtml_function_coverage=1 00:15:07.120 --rc genhtml_legend=1 00:15:07.120 --rc geninfo_all_blocks=1 00:15:07.120 --rc geninfo_unexecuted_blocks=1 00:15:07.120 00:15:07.120 ' 00:15:07.120 22:35:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:07.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.120 --rc genhtml_branch_coverage=1 00:15:07.120 --rc genhtml_function_coverage=1 00:15:07.120 --rc genhtml_legend=1 00:15:07.120 --rc geninfo_all_blocks=1 00:15:07.120 --rc geninfo_unexecuted_blocks=1 00:15:07.120 00:15:07.120 ' 00:15:07.120 22:35:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:07.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.120 --rc genhtml_branch_coverage=1 00:15:07.120 --rc genhtml_function_coverage=1 00:15:07.120 --rc genhtml_legend=1 00:15:07.120 --rc geninfo_all_blocks=1 00:15:07.120 --rc geninfo_unexecuted_blocks=1 00:15:07.120 00:15:07.120 ' 00:15:07.120 22:35:06 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:07.120 22:35:06 -- nvmf/common.sh@7 -- # uname -s 00:15:07.120 22:35:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.120 22:35:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.120 22:35:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.120 22:35:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.120 22:35:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.120 22:35:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.120 22:35:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.120 22:35:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.120 22:35:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.120 22:35:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.120 22:35:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:15:07.120 
22:35:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:15:07.120 22:35:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.120 22:35:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.120 22:35:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:07.120 22:35:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.120 22:35:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.120 22:35:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.120 22:35:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.120 22:35:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.120 22:35:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.120 22:35:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.120 22:35:06 -- paths/export.sh@5 -- # export PATH 00:15:07.120 22:35:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.120 22:35:06 -- nvmf/common.sh@46 -- # : 0 00:15:07.120 22:35:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:07.120 22:35:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:07.120 22:35:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:07.120 22:35:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.120 22:35:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.120 22:35:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:07.120 22:35:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:07.120 22:35:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:07.120 22:35:06 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.120 22:35:06 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.121 22:35:06 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:07.121 22:35:06 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.121 22:35:06 -- target/multipath.sh@43 -- # nvmftestinit 00:15:07.121 22:35:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:07.121 22:35:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.121 22:35:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:07.121 22:35:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:07.121 22:35:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:07.121 22:35:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.121 22:35:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.121 22:35:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.121 22:35:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:07.121 22:35:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:07.121 22:35:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:07.121 22:35:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:07.121 22:35:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:07.121 22:35:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:07.121 22:35:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.121 22:35:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.121 22:35:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:07.121 22:35:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:07.121 22:35:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:07.121 22:35:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:07.121 22:35:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:07.121 22:35:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.121 22:35:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:07.121 22:35:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:07.121 22:35:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:07.121 22:35:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:07.121 22:35:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:07.121 22:35:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:07.121 Cannot find device "nvmf_tgt_br" 00:15:07.121 22:35:06 -- nvmf/common.sh@154 -- # true 00:15:07.121 22:35:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.121 Cannot find device "nvmf_tgt_br2" 00:15:07.121 22:35:06 -- nvmf/common.sh@155 -- # true 00:15:07.121 22:35:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:07.121 22:35:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:07.121 Cannot find device "nvmf_tgt_br" 00:15:07.121 22:35:06 -- nvmf/common.sh@157 -- # true 00:15:07.121 22:35:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:07.121 Cannot find device "nvmf_tgt_br2" 00:15:07.121 22:35:06 -- nvmf/common.sh@158 -- # true 00:15:07.121 22:35:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:07.121 22:35:06 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:07.121 22:35:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.121 22:35:06 -- nvmf/common.sh@161 -- # true 00:15:07.121 22:35:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.121 22:35:06 -- nvmf/common.sh@162 -- # true 00:15:07.121 22:35:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:07.121 22:35:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:07.121 22:35:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:07.121 22:35:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:07.121 22:35:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:07.121 22:35:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:07.121 22:35:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:07.121 22:35:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:07.121 22:35:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:07.121 22:35:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:07.121 22:35:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:07.121 22:35:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:07.121 22:35:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:07.121 22:35:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:07.121 22:35:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:07.121 22:35:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:07.121 22:35:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:07.121 22:35:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:07.121 22:35:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:07.121 22:35:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.121 22:35:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:07.121 22:35:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.121 22:35:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.121 22:35:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:07.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:07.121 00:15:07.121 --- 10.0.0.2 ping statistics --- 00:15:07.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.121 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:07.121 22:35:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:07.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:07.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:07.121 00:15:07.121 --- 10.0.0.3 ping statistics --- 00:15:07.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.121 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:07.121 22:35:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:07.121 00:15:07.121 --- 10.0.0.1 ping statistics --- 00:15:07.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.121 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:07.121 22:35:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.121 22:35:06 -- nvmf/common.sh@421 -- # return 0 00:15:07.121 22:35:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:07.121 22:35:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.121 22:35:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:07.121 22:35:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:07.121 22:35:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.121 22:35:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:07.121 22:35:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:07.121 22:35:06 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:07.121 22:35:06 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:07.121 22:35:06 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:07.121 22:35:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.121 22:35:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.121 22:35:06 -- common/autotest_common.sh@10 -- # set +x 00:15:07.121 22:35:06 -- nvmf/common.sh@469 -- # nvmfpid=85387 00:15:07.121 22:35:06 -- nvmf/common.sh@470 -- # waitforlisten 85387 00:15:07.121 22:35:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:07.121 22:35:06 -- common/autotest_common.sh@829 -- # '[' -z 85387 ']' 00:15:07.121 22:35:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.121 22:35:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.121 22:35:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.121 22:35:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.121 22:35:06 -- common/autotest_common.sh@10 -- # set +x 00:15:07.121 [2024-11-20 22:35:06.965074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:07.121 [2024-11-20 22:35:06.965175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.121 [2024-11-20 22:35:07.103779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:07.121 [2024-11-20 22:35:07.174896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.121 [2024-11-20 22:35:07.175044] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:07.121 [2024-11-20 22:35:07.175056] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.121 [2024-11-20 22:35:07.175064] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.121 [2024-11-20 22:35:07.175221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.121 [2024-11-20 22:35:07.175751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.121 [2024-11-20 22:35:07.175913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.121 [2024-11-20 22:35:07.175920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.380 22:35:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.380 22:35:07 -- common/autotest_common.sh@862 -- # return 0 00:15:07.380 22:35:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:07.380 22:35:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.380 22:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:07.380 22:35:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.380 22:35:07 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.637 [2024-11-20 22:35:08.266156] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.637 22:35:08 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:07.895 Malloc0 00:15:07.895 22:35:08 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:08.154 22:35:08 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.413 22:35:09 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.670 [2024-11-20 22:35:09.205704] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.670 22:35:09 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:08.927 [2024-11-20 22:35:09.413938] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:08.927 22:35:09 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:09.185 22:35:09 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:09.185 22:35:09 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.185 22:35:09 -- common/autotest_common.sh@1187 -- # local i=0 00:15:09.186 22:35:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.186 22:35:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:09.186 22:35:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:11.716 22:35:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
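At this point the subsystem has listeners on both 10.0.0.2 and 10.0.0.3, and the host connects through each of them so the kernel assembles a single subsystem with two controller paths. A sketch of those host-side steps, reusing the hostnqn, hostid, and serial from this run (-g and -G request TCP header and data digests, as in the nvme connect lines above):

  #!/usr/bin/env bash
  # Sketch of the host-side multipath connection traced above.
  set -euo pipefail
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27
  HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27
  NQN=nqn.2016-06.io.spdk:cnode1

  # One connect per listener; both end up as paths under the same NVMe subsystem.
  for addr in 10.0.0.2 10.0.0.3; do
      sudo nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
          -t tcp -n "$NQN" -a "$addr" -s 4420 -g -G
  done

  # Wait for the namespace carrying the test serial to appear.
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do
      sleep 1
  done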
00:15:11.716 22:35:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:11.716 22:35:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.716 22:35:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:11.716 22:35:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.716 22:35:11 -- common/autotest_common.sh@1197 -- # return 0 00:15:11.716 22:35:11 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:11.716 22:35:11 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:11.716 22:35:11 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:11.716 22:35:11 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:11.716 22:35:11 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:11.716 22:35:11 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:11.716 22:35:11 -- target/multipath.sh@38 -- # return 0 00:15:11.716 22:35:11 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:11.716 22:35:11 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:11.716 22:35:11 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:11.716 22:35:11 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:11.716 22:35:11 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:11.716 22:35:11 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:11.716 22:35:11 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:11.716 22:35:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:11.716 22:35:11 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.716 22:35:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:11.716 22:35:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:11.716 22:35:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:11.716 22:35:11 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:11.716 22:35:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:11.716 22:35:11 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.716 22:35:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:11.716 22:35:11 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:11.716 22:35:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:11.716 22:35:11 -- target/multipath.sh@85 -- # echo numa 00:15:11.716 22:35:11 -- target/multipath.sh@88 -- # fio_pid=85525 00:15:11.716 22:35:11 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:11.716 22:35:11 -- target/multipath.sh@90 -- # sleep 1 00:15:11.716 [global] 00:15:11.717 thread=1 00:15:11.717 invalidate=1 00:15:11.717 rw=randrw 00:15:11.717 time_based=1 00:15:11.717 runtime=6 00:15:11.717 ioengine=libaio 00:15:11.717 direct=1 00:15:11.717 bs=4096 00:15:11.717 iodepth=128 00:15:11.717 norandommap=0 00:15:11.717 numjobs=1 00:15:11.717 00:15:11.717 verify_dump=1 00:15:11.717 verify_backlog=512 00:15:11.717 verify_state_save=0 00:15:11.717 do_verify=1 00:15:11.717 verify=crc32c-intel 00:15:11.717 [job0] 00:15:11.717 filename=/dev/nvme0n1 00:15:11.717 Could not set queue depth (nvme0n1) 00:15:11.717 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.717 fio-3.35 00:15:11.717 Starting 1 thread 00:15:12.283 22:35:12 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:12.541 22:35:13 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:12.799 22:35:13 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:12.799 22:35:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:12.799 22:35:13 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.799 22:35:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:12.799 22:35:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:12.799 22:35:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:12.799 22:35:13 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:12.799 22:35:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:12.799 22:35:13 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.799 22:35:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:12.799 22:35:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.799 22:35:13 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.799 22:35:13 -- target/multipath.sh@25 -- # sleep 1s 00:15:13.734 22:35:14 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:13.734 22:35:14 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.734 22:35:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:13.734 22:35:14 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:13.992 22:35:14 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:14.557 22:35:14 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:14.557 22:35:14 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:14.557 22:35:14 -- target/multipath.sh@22 -- # local timeout=20 00:15:14.557 22:35:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:14.557 22:35:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:14.557 22:35:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:14.557 22:35:14 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:14.557 22:35:14 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:14.557 22:35:14 -- target/multipath.sh@22 -- # local timeout=20 00:15:14.557 22:35:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:14.557 22:35:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:14.557 22:35:14 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:14.557 22:35:14 -- target/multipath.sh@25 -- # sleep 1s 00:15:15.491 22:35:15 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:15.491 22:35:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:15.491 22:35:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:15.491 22:35:15 -- target/multipath.sh@104 -- # wait 85525 00:15:18.017 00:15:18.017 job0: (groupid=0, jobs=1): err= 0: pid=85550: Wed Nov 20 22:35:18 2024 00:15:18.017 read: IOPS=13.1k, BW=51.1MiB/s (53.6MB/s)(307MiB/6003msec) 00:15:18.017 slat (usec): min=3, max=5359, avg=43.57, stdev=190.65 00:15:18.017 clat (usec): min=708, max=16195, avg=6709.85, stdev=1115.07 00:15:18.017 lat (usec): min=729, max=16230, avg=6753.42, stdev=1120.72 00:15:18.017 clat percentiles (usec): 00:15:18.017 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5538], 20.00th=[ 5866], 00:15:18.017 | 30.00th=[ 6063], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 6915], 00:15:18.017 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[ 8586], 00:15:18.017 | 99.00th=[10028], 99.50th=[10683], 99.90th=[12256], 99.95th=[15270], 00:15:18.017 | 99.99th=[15664] 00:15:18.017 bw ( KiB/s): min=12320, max=32424, per=51.80%, avg=27106.18, stdev=6349.13, samples=11 00:15:18.017 iops : min= 3080, max= 8106, avg=6776.55, stdev=1587.28, samples=11 00:15:18.017 write: IOPS=7562, BW=29.5MiB/s (31.0MB/s)(156MiB/5284msec); 0 zone resets 00:15:18.017 slat (usec): min=14, max=2004, avg=54.67, stdev=129.80 00:15:18.017 clat (usec): min=472, max=15571, avg=5856.34, stdev=946.31 00:15:18.017 lat (usec): min=497, max=15603, avg=5911.01, stdev=948.57 00:15:18.017 clat percentiles (usec): 00:15:18.017 | 1.00th=[ 3359], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 5276], 00:15:18.017 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 6063], 00:15:18.017 | 70.00th=[ 6194], 80.00th=[ 6390], 90.00th=[ 6718], 95.00th=[ 7111], 00:15:18.017 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[12780], 99.95th=[15270], 00:15:18.017 | 99.99th=[15401] 00:15:18.017 bw ( KiB/s): min=12408, max=32000, per=89.50%, avg=27074.91, stdev=6134.87, samples=11 00:15:18.017 iops : min= 3102, max= 8000, avg=6768.73, stdev=1533.72, samples=11 00:15:18.017 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:18.017 lat (msec) : 2=0.05%, 4=1.70%, 10=97.42%, 20=0.82% 00:15:18.017 cpu : usr=6.16%, sys=24.97%, ctx=7264, majf=0, minf=127 00:15:18.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:18.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:18.017 issued rwts: total=78532,39961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.017 00:15:18.017 Run status group 0 (all jobs): 00:15:18.017 READ: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=307MiB (322MB), run=6003-6003msec 00:15:18.017 WRITE: bw=29.5MiB/s (31.0MB/s), 29.5MiB/s-29.5MiB/s (31.0MB/s-31.0MB/s), io=156MiB (164MB), run=5284-5284msec 00:15:18.017 00:15:18.017 Disk stats (read/write): 00:15:18.017 nvme0n1: ios=77511/39068, merge=0/0, ticks=483020/212071, in_queue=695091, util=98.63% 00:15:18.017 22:35:18 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:18.017 22:35:18 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:18.274 22:35:18 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:18.275 22:35:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:18.275 22:35:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:18.275 22:35:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:18.275 22:35:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:18.275 22:35:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:18.275 22:35:18 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:18.275 22:35:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:18.275 22:35:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:18.275 22:35:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:18.275 22:35:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.275 22:35:18 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:18.275 22:35:18 -- target/multipath.sh@25 -- # sleep 1s 00:15:19.208 22:35:19 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:19.208 22:35:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.208 22:35:19 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:19.208 22:35:19 -- target/multipath.sh@113 -- # echo round-robin 00:15:19.208 22:35:19 -- target/multipath.sh@116 -- # fio_pid=85674 00:15:19.208 22:35:19 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:19.208 22:35:19 -- target/multipath.sh@118 -- # sleep 1 00:15:19.208 [global] 00:15:19.208 thread=1 00:15:19.208 invalidate=1 00:15:19.208 rw=randrw 00:15:19.208 time_based=1 00:15:19.208 runtime=6 00:15:19.208 ioengine=libaio 00:15:19.208 direct=1 00:15:19.208 bs=4096 00:15:19.208 iodepth=128 00:15:19.208 norandommap=0 00:15:19.208 numjobs=1 00:15:19.208 00:15:19.208 verify_dump=1 00:15:19.208 verify_backlog=512 00:15:19.208 verify_state_save=0 00:15:19.208 do_verify=1 00:15:19.208 verify=crc32c-intel 00:15:19.208 [job0] 00:15:19.208 filename=/dev/nvme0n1 00:15:19.208 Could not set queue depth (nvme0n1) 00:15:19.208 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:19.208 fio-3.35 00:15:19.208 Starting 1 thread 00:15:20.142 22:35:20 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:20.400 22:35:21 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:20.658 22:35:21 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:20.658 22:35:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:20.658 22:35:21 -- target/multipath.sh@22 -- # local timeout=20 00:15:20.658 22:35:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:20.658 22:35:21 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:20.658 22:35:21 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:20.658 22:35:21 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:20.658 22:35:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:20.658 22:35:21 -- target/multipath.sh@22 -- # local timeout=20 00:15:20.658 22:35:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:20.658 22:35:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:20.658 22:35:21 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:20.658 22:35:21 -- target/multipath.sh@25 -- # sleep 1s 00:15:21.592 22:35:22 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:21.592 22:35:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.592 22:35:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:21.592 22:35:22 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:21.850 22:35:22 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:22.108 22:35:22 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:22.108 22:35:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:22.108 22:35:22 -- target/multipath.sh@22 -- # local timeout=20 00:15:22.108 22:35:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:22.108 22:35:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:22.108 22:35:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:22.108 22:35:22 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:22.108 22:35:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:22.108 22:35:22 -- target/multipath.sh@22 -- # local timeout=20 00:15:22.108 22:35:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:22.108 22:35:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:22.108 22:35:22 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:22.108 22:35:22 -- target/multipath.sh@25 -- # sleep 1s 00:15:23.044 22:35:23 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:23.044 22:35:23 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:23.044 22:35:23 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:23.044 22:35:23 -- target/multipath.sh@132 -- # wait 85674 00:15:25.591 00:15:25.591 job0: (groupid=0, jobs=1): err= 0: pid=85701: Wed Nov 20 22:35:26 2024 00:15:25.591 read: IOPS=13.0k, BW=50.8MiB/s (53.2MB/s)(305MiB/6006msec) 00:15:25.591 slat (usec): min=3, max=4425, avg=39.00, stdev=183.27 00:15:25.591 clat (usec): min=361, max=18119, avg=6829.50, stdev=1858.23 00:15:25.591 lat (usec): min=375, max=18131, avg=6868.50, stdev=1863.92 00:15:25.591 clat percentiles (usec): 00:15:25.591 | 1.00th=[ 2040], 5.00th=[ 3490], 10.00th=[ 4752], 20.00th=[ 5997], 00:15:25.591 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6718], 60.00th=[ 7046], 00:15:25.591 | 70.00th=[ 7439], 80.00th=[ 7832], 90.00th=[ 8848], 95.00th=[10159], 00:15:25.591 | 99.00th=[12649], 99.50th=[13435], 99.90th=[15401], 99.95th=[16581], 00:15:25.591 | 99.99th=[17695] 00:15:25.591 bw ( KiB/s): min=10856, max=34680, per=53.48%, avg=27799.91, stdev=7813.52, samples=11 00:15:25.591 iops : min= 2714, max= 8670, avg=6949.91, stdev=1953.38, samples=11 00:15:25.591 write: IOPS=7730, BW=30.2MiB/s (31.7MB/s)(153MiB/5065msec); 0 zone resets 00:15:25.591 slat (usec): min=10, max=3987, avg=50.39, stdev=126.20 00:15:25.591 clat (usec): min=604, max=15871, avg=5849.76, stdev=1713.44 00:15:25.591 lat (usec): min=667, max=15898, avg=5900.15, stdev=1717.80 00:15:25.591 clat percentiles (usec): 00:15:25.591 | 1.00th=[ 1631], 5.00th=[ 2540], 10.00th=[ 3326], 20.00th=[ 4948], 00:15:25.591 | 30.00th=[ 5538], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6194], 00:15:25.591 | 70.00th=[ 6456], 80.00th=[ 6718], 90.00th=[ 7504], 95.00th=[ 8848], 00:15:25.591 | 99.00th=[10552], 99.50th=[11338], 99.90th=[13173], 99.95th=[14222], 00:15:25.591 | 99.99th=[15008] 00:15:25.591 bw ( KiB/s): min=11616, max=34696, per=89.90%, avg=27800.64, stdev=7565.26, samples=11 00:15:25.591 iops : min= 2904, max= 8674, avg=6950.09, stdev=1891.31, samples=11 00:15:25.591 lat (usec) : 500=0.02%, 750=0.07%, 1000=0.06% 00:15:25.591 lat (msec) : 2=1.31%, 4=8.02%, 10=86.36%, 20=4.17% 00:15:25.591 cpu : usr=6.28%, sys=23.31%, ctx=7556, majf=0, minf=145 00:15:25.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:25.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.591 issued rwts: total=78054,39155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.592 00:15:25.592 Run status group 0 (all jobs): 00:15:25.592 READ: bw=50.8MiB/s (53.2MB/s), 50.8MiB/s-50.8MiB/s (53.2MB/s-53.2MB/s), io=305MiB (320MB), run=6006-6006msec 00:15:25.592 WRITE: bw=30.2MiB/s (31.7MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=153MiB (160MB), run=5065-5065msec 00:15:25.592 00:15:25.592 Disk stats (read/write): 00:15:25.592 nvme0n1: ios=76483/38901, merge=0/0, ticks=487102/211359, in_queue=698461, util=98.63% 00:15:25.592 22:35:26 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:25.592 22:35:26 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.592 22:35:26 -- common/autotest_common.sh@1208 -- # local i=0 00:15:25.592 22:35:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:25.592 22:35:26 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.592 22:35:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:25.592 22:35:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.592 22:35:26 -- common/autotest_common.sh@1220 -- # return 0 00:15:25.592 22:35:26 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.159 22:35:26 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:26.159 22:35:26 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:26.159 22:35:26 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:26.159 22:35:26 -- target/multipath.sh@144 -- # nvmftestfini 00:15:26.159 22:35:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:26.159 22:35:26 -- nvmf/common.sh@116 -- # sync 00:15:26.159 22:35:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:26.159 22:35:26 -- nvmf/common.sh@119 -- # set +e 00:15:26.159 22:35:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:26.159 22:35:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:26.159 rmmod nvme_tcp 00:15:26.159 rmmod nvme_fabrics 00:15:26.159 rmmod nvme_keyring 00:15:26.159 22:35:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:26.159 22:35:26 -- nvmf/common.sh@123 -- # set -e 00:15:26.159 22:35:26 -- nvmf/common.sh@124 -- # return 0 00:15:26.159 22:35:26 -- nvmf/common.sh@477 -- # '[' -n 85387 ']' 00:15:26.159 22:35:26 -- nvmf/common.sh@478 -- # killprocess 85387 00:15:26.159 22:35:26 -- common/autotest_common.sh@936 -- # '[' -z 85387 ']' 00:15:26.159 22:35:26 -- common/autotest_common.sh@940 -- # kill -0 85387 00:15:26.159 22:35:26 -- common/autotest_common.sh@941 -- # uname 00:15:26.159 22:35:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.159 22:35:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85387 00:15:26.159 22:35:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:26.159 22:35:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:26.159 killing process with pid 85387 00:15:26.159 22:35:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85387' 00:15:26.159 22:35:26 -- common/autotest_common.sh@955 -- # kill 85387 00:15:26.159 22:35:26 -- common/autotest_common.sh@960 -- # wait 85387 00:15:26.418 22:35:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:26.418 22:35:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:26.418 22:35:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:26.418 22:35:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.418 22:35:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:26.418 22:35:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.418 22:35:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.418 22:35:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.418 22:35:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:26.418 00:15:26.418 real 0m20.680s 00:15:26.418 user 1m20.810s 00:15:26.418 sys 0m6.475s 00:15:26.418 22:35:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:26.418 ************************************ 00:15:26.418 END TEST nvmf_multipath 00:15:26.418 ************************************ 00:15:26.418 22:35:27 -- common/autotest_common.sh@10 -- # set +x 00:15:26.418 22:35:27 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:26.418 22:35:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:26.418 22:35:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.418 22:35:27 -- common/autotest_common.sh@10 -- # set +x 00:15:26.418 ************************************ 00:15:26.418 START TEST nvmf_zcopy 00:15:26.418 ************************************ 00:15:26.418 22:35:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:26.678 * Looking for test storage... 00:15:26.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:26.678 22:35:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:26.678 22:35:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:26.678 22:35:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:26.678 22:35:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:26.678 22:35:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:26.678 22:35:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:26.678 22:35:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:26.678 22:35:27 -- scripts/common.sh@335 -- # IFS=.-: 00:15:26.678 22:35:27 -- scripts/common.sh@335 -- # read -ra ver1 00:15:26.678 22:35:27 -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.678 22:35:27 -- scripts/common.sh@336 -- # read -ra ver2 00:15:26.678 22:35:27 -- scripts/common.sh@337 -- # local 'op=<' 00:15:26.678 22:35:27 -- scripts/common.sh@339 -- # ver1_l=2 00:15:26.678 22:35:27 -- scripts/common.sh@340 -- # ver2_l=1 00:15:26.678 22:35:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:26.678 22:35:27 -- scripts/common.sh@343 -- # case "$op" in 00:15:26.678 22:35:27 -- scripts/common.sh@344 -- # : 1 00:15:26.678 22:35:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:26.678 22:35:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.678 22:35:27 -- scripts/common.sh@364 -- # decimal 1 00:15:26.678 22:35:27 -- scripts/common.sh@352 -- # local d=1 00:15:26.678 22:35:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.678 22:35:27 -- scripts/common.sh@354 -- # echo 1 00:15:26.678 22:35:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:26.678 22:35:27 -- scripts/common.sh@365 -- # decimal 2 00:15:26.678 22:35:27 -- scripts/common.sh@352 -- # local d=2 00:15:26.678 22:35:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.678 22:35:27 -- scripts/common.sh@354 -- # echo 2 00:15:26.678 22:35:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:26.678 22:35:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:26.678 22:35:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:26.678 22:35:27 -- scripts/common.sh@367 -- # return 0 00:15:26.678 22:35:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.678 22:35:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.678 --rc genhtml_branch_coverage=1 00:15:26.678 --rc genhtml_function_coverage=1 00:15:26.678 --rc genhtml_legend=1 00:15:26.678 --rc geninfo_all_blocks=1 00:15:26.678 --rc geninfo_unexecuted_blocks=1 00:15:26.678 00:15:26.678 ' 00:15:26.678 22:35:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.678 --rc genhtml_branch_coverage=1 00:15:26.678 --rc genhtml_function_coverage=1 00:15:26.678 --rc genhtml_legend=1 00:15:26.678 --rc geninfo_all_blocks=1 00:15:26.678 --rc geninfo_unexecuted_blocks=1 00:15:26.678 00:15:26.678 ' 00:15:26.678 22:35:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.678 --rc genhtml_branch_coverage=1 00:15:26.678 --rc genhtml_function_coverage=1 00:15:26.678 --rc genhtml_legend=1 00:15:26.678 --rc geninfo_all_blocks=1 00:15:26.678 --rc geninfo_unexecuted_blocks=1 00:15:26.678 00:15:26.678 ' 00:15:26.678 22:35:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.678 --rc genhtml_branch_coverage=1 00:15:26.678 --rc genhtml_function_coverage=1 00:15:26.678 --rc genhtml_legend=1 00:15:26.678 --rc geninfo_all_blocks=1 00:15:26.678 --rc geninfo_unexecuted_blocks=1 00:15:26.678 00:15:26.678 ' 00:15:26.678 22:35:27 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.678 22:35:27 -- nvmf/common.sh@7 -- # uname -s 00:15:26.678 22:35:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.678 22:35:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.678 22:35:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.678 22:35:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.678 22:35:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.678 22:35:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.678 22:35:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.678 22:35:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.678 22:35:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.678 22:35:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.678 22:35:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:15:26.678 
22:35:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:15:26.678 22:35:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.678 22:35:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.678 22:35:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.678 22:35:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.678 22:35:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.678 22:35:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.678 22:35:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.678 22:35:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.678 22:35:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.678 22:35:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.678 22:35:27 -- paths/export.sh@5 -- # export PATH 00:15:26.678 22:35:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.678 22:35:27 -- nvmf/common.sh@46 -- # : 0 00:15:26.678 22:35:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:26.678 22:35:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:26.678 22:35:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:26.678 22:35:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.678 22:35:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.678 22:35:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:26.678 22:35:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:26.678 22:35:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:26.678 22:35:27 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:26.678 22:35:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:26.678 22:35:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.678 22:35:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:26.678 22:35:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:26.678 22:35:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:26.678 22:35:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.678 22:35:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.678 22:35:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.678 22:35:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:26.678 22:35:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:26.678 22:35:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:26.678 22:35:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:26.678 22:35:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:26.678 22:35:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:26.678 22:35:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.678 22:35:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.678 22:35:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:26.678 22:35:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:26.678 22:35:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.678 22:35:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.678 22:35:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.678 22:35:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.678 22:35:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.678 22:35:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.678 22:35:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.678 22:35:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.678 22:35:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:26.678 22:35:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:26.678 Cannot find device "nvmf_tgt_br" 00:15:26.678 22:35:27 -- nvmf/common.sh@154 -- # true 00:15:26.678 22:35:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.678 Cannot find device "nvmf_tgt_br2" 00:15:26.678 22:35:27 -- nvmf/common.sh@155 -- # true 00:15:26.678 22:35:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:26.678 22:35:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:26.678 Cannot find device "nvmf_tgt_br" 00:15:26.678 22:35:27 -- nvmf/common.sh@157 -- # true 00:15:26.678 22:35:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:26.678 Cannot find device "nvmf_tgt_br2" 00:15:26.679 22:35:27 -- nvmf/common.sh@158 -- # true 00:15:26.679 22:35:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:26.937 22:35:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:26.937 22:35:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.937 22:35:27 -- nvmf/common.sh@161 -- # true 00:15:26.937 22:35:27 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.937 22:35:27 -- nvmf/common.sh@162 -- # true 00:15:26.937 22:35:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.937 22:35:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.937 22:35:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.937 22:35:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.937 22:35:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.937 22:35:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.937 22:35:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.937 22:35:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.937 22:35:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.937 22:35:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:26.937 22:35:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:26.937 22:35:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:26.937 22:35:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:26.937 22:35:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.937 22:35:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.937 22:35:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.937 22:35:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:26.937 22:35:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:26.937 22:35:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.937 22:35:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.937 22:35:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.937 22:35:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.937 22:35:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.937 22:35:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:26.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:26.937 00:15:26.937 --- 10.0.0.2 ping statistics --- 00:15:26.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.937 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:26.937 22:35:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:26.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:26.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:26.937 00:15:26.937 --- 10.0.0.3 ping statistics --- 00:15:26.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.937 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:26.937 22:35:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:26.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:26.937 00:15:26.937 --- 10.0.0.1 ping statistics --- 00:15:26.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.938 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:26.938 22:35:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.938 22:35:27 -- nvmf/common.sh@421 -- # return 0 00:15:26.938 22:35:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:26.938 22:35:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.938 22:35:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:26.938 22:35:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:26.938 22:35:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.938 22:35:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:26.938 22:35:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:27.196 22:35:27 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:27.196 22:35:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:27.196 22:35:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.196 22:35:27 -- common/autotest_common.sh@10 -- # set +x 00:15:27.196 22:35:27 -- nvmf/common.sh@469 -- # nvmfpid=85989 00:15:27.196 22:35:27 -- nvmf/common.sh@470 -- # waitforlisten 85989 00:15:27.196 22:35:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:27.196 22:35:27 -- common/autotest_common.sh@829 -- # '[' -z 85989 ']' 00:15:27.196 22:35:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.196 22:35:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.196 22:35:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.196 22:35:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.196 22:35:27 -- common/autotest_common.sh@10 -- # set +x 00:15:27.196 [2024-11-20 22:35:27.747000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:27.196 [2024-11-20 22:35:27.747092] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.196 [2024-11-20 22:35:27.883072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.455 [2024-11-20 22:35:27.954946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:27.455 [2024-11-20 22:35:27.955084] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.455 [2024-11-20 22:35:27.955095] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.455 [2024-11-20 22:35:27.955102] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:27.455 [2024-11-20 22:35:27.955127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.022 22:35:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.022 22:35:28 -- common/autotest_common.sh@862 -- # return 0 00:15:28.022 22:35:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:28.022 22:35:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:28.022 22:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:28.022 22:35:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.022 22:35:28 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:28.022 22:35:28 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:28.022 22:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.022 22:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:28.022 [2024-11-20 22:35:28.704887] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.022 22:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.022 22:35:28 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:28.022 22:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.022 22:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:28.022 22:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.023 22:35:28 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.023 22:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.023 22:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:28.023 [2024-11-20 22:35:28.721033] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.023 22:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.023 22:35:28 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:28.023 22:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.023 22:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:28.023 22:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.023 22:35:28 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:28.023 22:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.023 22:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:28.023 malloc0 00:15:28.023 22:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.023 22:35:28 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:28.023 22:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.023 22:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:28.281 22:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.281 22:35:28 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:28.281 22:35:28 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:28.281 22:35:28 -- nvmf/common.sh@520 -- # config=() 00:15:28.281 22:35:28 -- nvmf/common.sh@520 -- # local subsystem config 00:15:28.281 22:35:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:28.281 22:35:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:28.281 { 00:15:28.281 "params": { 00:15:28.281 "name": "Nvme$subsystem", 00:15:28.281 "trtype": "$TEST_TRANSPORT", 
00:15:28.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.281 "adrfam": "ipv4", 00:15:28.281 "trsvcid": "$NVMF_PORT", 00:15:28.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.281 "hdgst": ${hdgst:-false}, 00:15:28.281 "ddgst": ${ddgst:-false} 00:15:28.281 }, 00:15:28.281 "method": "bdev_nvme_attach_controller" 00:15:28.281 } 00:15:28.281 EOF 00:15:28.281 )") 00:15:28.281 22:35:28 -- nvmf/common.sh@542 -- # cat 00:15:28.281 22:35:28 -- nvmf/common.sh@544 -- # jq . 00:15:28.281 22:35:28 -- nvmf/common.sh@545 -- # IFS=, 00:15:28.281 22:35:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:28.281 "params": { 00:15:28.281 "name": "Nvme1", 00:15:28.281 "trtype": "tcp", 00:15:28.281 "traddr": "10.0.0.2", 00:15:28.281 "adrfam": "ipv4", 00:15:28.281 "trsvcid": "4420", 00:15:28.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:28.281 "hdgst": false, 00:15:28.281 "ddgst": false 00:15:28.281 }, 00:15:28.281 "method": "bdev_nvme_attach_controller" 00:15:28.281 }' 00:15:28.281 [2024-11-20 22:35:28.812943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:28.281 [2024-11-20 22:35:28.813022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86040 ] 00:15:28.281 [2024-11-20 22:35:28.951009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.541 [2024-11-20 22:35:29.026918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.541 Running I/O for 10 seconds... 00:15:38.515 00:15:38.515 Latency(us) 00:15:38.515 [2024-11-20T22:35:39.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.515 [2024-11-20T22:35:39.249Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:38.515 Verification LBA range: start 0x0 length 0x1000 00:15:38.515 Nvme1n1 : 10.01 11342.66 88.61 0.00 0.00 11257.89 755.90 22043.93 00:15:38.515 [2024-11-20T22:35:39.249Z] =================================================================================================================== 00:15:38.515 [2024-11-20T22:35:39.249Z] Total : 11342.66 88.61 0.00 0.00 11257.89 755.90 22043.93 00:15:38.773 22:35:39 -- target/zcopy.sh@39 -- # perfpid=86163 00:15:38.773 22:35:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:38.773 22:35:39 -- common/autotest_common.sh@10 -- # set +x 00:15:38.773 22:35:39 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:38.773 22:35:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:38.773 22:35:39 -- nvmf/common.sh@520 -- # config=() 00:15:38.773 22:35:39 -- nvmf/common.sh@520 -- # local subsystem config 00:15:38.773 22:35:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:38.773 22:35:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:38.773 { 00:15:38.773 "params": { 00:15:38.773 "name": "Nvme$subsystem", 00:15:38.773 "trtype": "$TEST_TRANSPORT", 00:15:38.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.773 "adrfam": "ipv4", 00:15:38.773 "trsvcid": "$NVMF_PORT", 00:15:38.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.773 "hdgst": ${hdgst:-false}, 00:15:38.773 "ddgst": ${ddgst:-false} 
00:15:38.773 }, 00:15:38.773 "method": "bdev_nvme_attach_controller" 00:15:38.773 } 00:15:38.773 EOF 00:15:38.773 )") 00:15:38.773 22:35:39 -- nvmf/common.sh@542 -- # cat 00:15:38.773 [2024-11-20 22:35:39.495887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.773 [2024-11-20 22:35:39.495939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.773 22:35:39 -- nvmf/common.sh@544 -- # jq . 00:15:38.773 22:35:39 -- nvmf/common.sh@545 -- # IFS=, 00:15:38.773 22:35:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:38.773 "params": { 00:15:38.773 "name": "Nvme1", 00:15:38.773 "trtype": "tcp", 00:15:38.773 "traddr": "10.0.0.2", 00:15:38.773 "adrfam": "ipv4", 00:15:38.773 "trsvcid": "4420", 00:15:38.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.774 "hdgst": false, 00:15:38.774 "ddgst": false 00:15:38.774 }, 00:15:38.774 "method": "bdev_nvme_attach_controller" 00:15:38.774 }' 00:15:38.774 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.507850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.507876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.519853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.519890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.527085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:39.033 [2024-11-20 22:35:39.527185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86163 ] 00:15:39.033 [2024-11-20 22:35:39.531856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.531893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.543860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.543897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.555843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.555879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.567848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.567883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.579851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.579887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.591855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.591890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.603858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.603893] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.615860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.615895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.627864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.627899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.639881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.639919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.651870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.651905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.658171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.033 [2024-11-20 22:35:39.663875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.663913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.675876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.675911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.687880] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.687914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.699885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.699919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.711890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.711925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.718659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.033 [2024-11-20 22:35:39.723892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.723927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.033 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.033 [2024-11-20 22:35:39.735892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.033 [2024-11-20 22:35:39.735927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.034 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.034 [2024-11-20 22:35:39.747896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.034 [2024-11-20 22:35:39.747931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.034 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.034 [2024-11-20 22:35:39.759919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.034 [2024-11-20 22:35:39.759940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.771912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.771946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.783914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.783949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.795917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.795952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.807923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.807958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.819923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.819957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.831928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.831963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.843928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.843963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.855954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.855995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.867952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.867991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.879957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.879995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.891977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.892016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.903974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.904011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.915967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.916006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 Running I/O for 5 seconds... 
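The repeated failures above all come from the same JSON-RPC call: the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is already attached, so the target rejects every attempt with Code=-32602 (Invalid parameters) and subsystem.c logs "Requested NSID 1 already in use". A minimal sketch of the call being exercised, assuming a running SPDK target with its default RPC socket and the rpc.py helper from the SPDK tree (exact flag spelling is an assumption, not taken from this log):

    # Re-issue the namespace add that the log shows failing; malloc0 is already
    # attached to the subsystem as NSID 1, so the target answers with -32602.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1

The parameters match the ones echoed in each error entry: namespace {bdev_name: malloc0, nsid: 1} on subsystem nqn.2016-06.io.spdk:cnode1.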
00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.932723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.932750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.949435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.949479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.966012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.966039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.293 [2024-11-20 22:35:39.982897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.293 [2024-11-20 22:35:39.982925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.293 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.294 [2024-11-20 22:35:39.998769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.294 [2024-11-20 22:35:39.998797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.294 2024/11/20 22:35:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.294 [2024-11-20 22:35:40.011042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.294 [2024-11-20 22:35:40.011085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.294 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.028662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.028689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.043472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.043514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.060116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.060144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.076891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.076919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.093028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.093056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.109190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.109256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.125490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.125534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.136829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.136855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.152443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.152468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.168891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.168919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.184844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.184872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.201113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.201140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.217302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.217344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.233873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.233900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.250301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.250328] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.266275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.266312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.553 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.553 [2024-11-20 22:35:40.283137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.553 [2024-11-20 22:35:40.283164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.299708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.299735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.315860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.315886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.332139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.332181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.348381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.348414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.366022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 
22:35:40.366049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.381453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.381481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.398076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.398102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.414431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.414459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.431006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.431050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.442942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.442984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.452044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.452087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.466591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:39.813 [2024-11-20 22:35:40.466621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.477761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.477804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.485712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.485754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.502135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.502177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.513371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.513399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.529471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.529514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.813 [2024-11-20 22:35:40.539922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.813 [2024-11-20 22:35:40.539965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.813 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.073 [2024-11-20 22:35:40.556296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:40.073 [2024-11-20 22:35:40.556351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.073 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.073 [2024-11-20 22:35:40.572868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.572895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.589739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.589766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.606510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.606552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.622582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.622625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.634776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.634802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.650306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.650348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.667426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.667453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.683114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.683141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.699482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.699508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.715792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.715819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.732044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.732071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.744104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.744300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.760113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.760298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.776355] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.776386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.074 [2024-11-20 22:35:40.793290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.074 [2024-11-20 22:35:40.793339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.074 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.810211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.810242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.826885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.827032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.842918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.842949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.859190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.859221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.875776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.875807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 
22:35:40.887631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.887663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.903051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.903083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.919422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.919452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.935682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.935713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.951601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.951632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.964839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.964870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:40.979738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.979771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
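To see why NSID 1 is considered taken, the target's current subsystem state can be dumped with the nvmf_get_subsystems RPC; a sketch, assuming the same rpc.py helper as above:

    # List subsystems and their attached namespaces; the output should show malloc0
    # at nsid 1 under nqn.2016-06.io.spdk:cnode1, which is what makes every
    # nvmf_subsystem_add_ns attempt above fail while the subsystem is paused/resumed.
    scripts/rpc.py nvmf_get_subsystems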
00:15:40.334 [2024-11-20 22:35:40.992255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:40.992300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:41.003798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:41.003829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:41.019126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:41.019158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:41.034912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:41.034943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:41.048069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:41.048102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.334 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.334 [2024-11-20 22:35:41.063158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.334 [2024-11-20 22:35:41.063192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.079890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.079921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.096076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.096108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.112286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.112315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.123944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.123975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.138515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.138546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.152795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.152827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.167806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.167837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.183624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.183655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.198149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.198319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.214800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.214833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.230683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.230714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.242507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.242538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.257573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.257604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.273778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.273809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.289784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.289816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.300701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.300848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.595 [2024-11-20 22:35:41.317179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.595 [2024-11-20 22:35:41.317221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.595 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.333365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.333400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.345478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.345512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.361921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.361952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.377525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.377559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.389162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.389193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.405215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.405263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.421239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.421272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.437621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.437652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.453496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.453543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.465752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.465783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.480722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.480753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.497696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.855 [2024-11-20 22:35:41.497726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.855 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.855 [2024-11-20 22:35:41.513759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.856 [2024-11-20 22:35:41.513792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.856 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.856 [2024-11-20 22:35:41.530482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.856 [2024-11-20 22:35:41.530513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.856 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.856 [2024-11-20 22:35:41.546645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.856 [2024-11-20 22:35:41.546676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.856 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.856 [2024-11-20 22:35:41.562715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.856 [2024-11-20 22:35:41.562746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.856 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.856 [2024-11-20 22:35:41.578207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.856 [2024-11-20 22:35:41.578238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.856 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.592442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.592473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.608780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.608811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.626176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.626207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.642594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.642625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.658872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.658903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.675294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.675323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.692288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.692318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.708271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.708312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.724934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.724966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.741720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.741751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.758001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.758032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.774578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.774609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.791736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.791768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.807639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.807670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.823634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.823667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.115 [2024-11-20 22:35:41.835682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.115 [2024-11-20 22:35:41.835714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.115 2024/11/20 22:35:41 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.375 [2024-11-20 22:35:41.849669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.375 [2024-11-20 22:35:41.849896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.375 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.375 [2024-11-20 22:35:41.864995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.375 [2024-11-20 22:35:41.865139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.375 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.375 [2024-11-20 22:35:41.881316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.375 [2024-11-20 22:35:41.881348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.375 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.375 [2024-11-20 22:35:41.892096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.375 [2024-11-20 22:35:41.892127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.375 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.375 [2024-11-20 22:35:41.908859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.375 [2024-11-20 22:35:41.909001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.375 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.375 [2024-11-20 22:35:41.925213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.375 [2024-11-20 22:35:41.925262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.375 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.375 [2024-11-20 22:35:41.941757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.375 [2024-11-20 22:35:41.941789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.375 2024/11/20 
22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.375 [2024-11-20 22:35:41.958084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.375 [2024-11-20 22:35:41.958115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:41.974171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:41.974202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:41.990847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:41.990878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:42.007068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:42.007100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:42.023568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:42.023599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:42.039670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:42.039702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:42.051334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:42.051364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:41.376 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:42.066834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:42.066980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:42.083153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:42.083336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.376 [2024-11-20 22:35:42.099888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.376 [2024-11-20 22:35:42.100033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.376 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.635 [2024-11-20 22:35:42.115512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.635 [2024-11-20 22:35:42.115671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.635 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.635 [2024-11-20 22:35:42.130975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.635 [2024-11-20 22:35:42.131118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.635 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.635 [2024-11-20 22:35:42.147705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.635 [2024-11-20 22:35:42.147847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.164200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.164357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.180287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.180426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.197627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.197660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.213862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.213893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.230093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.230125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.246647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.246694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.262925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.262956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.279626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.279675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.289729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.289779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.303157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.303189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.319454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.319485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.330310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.330352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.346074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.346106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.636 [2024-11-20 22:35:42.362535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.636 [2024-11-20 22:35:42.362569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.636 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.377035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.377243] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.392837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.392869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.403924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.404068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.419588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.419619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.435623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.435654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.447602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.447636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.462881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.463025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.479495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 
22:35:42.479528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.495216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.495247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.510187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.510341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.526732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.526763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.543795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.543826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.559705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.559736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.575880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.575911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.592458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:41.896 [2024-11-20 22:35:42.592488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.608439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.608470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.896 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.896 [2024-11-20 22:35:42.620001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.896 [2024-11-20 22:35:42.620032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.897 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.155 [2024-11-20 22:35:42.635329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.155 [2024-11-20 22:35:42.635359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.155 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.155 [2024-11-20 22:35:42.647995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.155 [2024-11-20 22:35:42.648026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.155 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.663818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.663850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.677284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.677326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.693436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:42.156 [2024-11-20 22:35:42.693468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.709504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.709535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.725452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.725484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.742468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.742501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.759786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.759820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.774639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.774685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.792519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.792546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.807206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.807397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.822999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.823143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.839765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.839797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.856464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.856495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.156 [2024-11-20 22:35:42.872694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.156 [2024-11-20 22:35:42.872725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.156 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.415 [2024-11-20 22:35:42.888884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.415 [2024-11-20 22:35:42.888917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:42.905525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:42.905588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:42.922228] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:42.922259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:42.938807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:42.938839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:42.955301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:42.955332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:42.971462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:42.971494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:42.987363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:42.987393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:43.002009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.002040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:43.012819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.012850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 
22:35:43.027996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.028027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:43.044312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.044341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:43.061168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.061207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:43.076931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.076963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:43.093479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.093511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:43.109346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.109376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.416 [2024-11-20 22:35:43.123244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.123421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:42.416 [2024-11-20 22:35:43.138130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.416 [2024-11-20 22:35:43.138162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.416 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.152423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.152454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.167973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.168005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.183913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.183945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.200457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.200487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.217620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.217652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.234516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.234548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.250030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.250177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.261590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.261632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.276998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.277025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.292850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.292877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.309519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.309546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.325805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.325832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.341831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.341858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.358093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.358119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.374896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.374924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.675 [2024-11-20 22:35:43.391971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.675 [2024-11-20 22:35:43.392031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.675 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.408908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.408953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.423734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.423761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.438922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.438950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.453558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.453588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.463998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.464025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.479479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.479506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.495636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.495664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.512683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.512710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.529253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.529294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.545141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.545168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.561650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.561676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.577664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.577691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.592392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.592419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.604761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.604788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.620519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.620546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.636143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.636170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.646870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.646896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.935 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.935 [2024-11-20 22:35:43.662513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.935 [2024-11-20 22:35:43.662540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.194 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.194 [2024-11-20 22:35:43.679111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.194 [2024-11-20 22:35:43.679138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.194 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.194 [2024-11-20 22:35:43.695940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.194 [2024-11-20 22:35:43.695969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.194 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.194 [2024-11-20 22:35:43.712267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.194 [2024-11-20 22:35:43.712325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.194 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.194 [2024-11-20 22:35:43.727478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.194 [2024-11-20 22:35:43.727521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.194 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.194 [2024-11-20 22:35:43.741759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.194 [2024-11-20 22:35:43.741803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.194 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.194 [2024-11-20 22:35:43.757787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.194 [2024-11-20 22:35:43.757831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.194 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.194 [2024-11-20 22:35:43.774792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.194 [2024-11-20 22:35:43.774819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.790891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.790918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.807461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.807491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.824098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.824125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.840659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.840686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.856480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.856506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.868731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.868774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.883432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.883476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.898092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.898118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.195 [2024-11-20 22:35:43.914748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.195 [2024-11-20 22:35:43.914776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.195 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.453 [2024-11-20 22:35:43.931043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.453 [2024-11-20 22:35:43.931070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.453 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:43.947730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:43.947757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:43.963769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:43.963796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:43.980550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:43.980592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:43.997015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:43.997042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.014664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.014692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.029912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.029938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.042784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.042811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.053787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.053814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.069094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.069120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.086005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.086032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.102172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.102199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 
22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.118869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.118896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.135182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.135209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.151149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.151175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.164388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.164415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.454 [2024-11-20 22:35:44.180091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.454 [2024-11-20 22:35:44.180118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.454 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.196551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.196580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.213036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.213063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.229961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.229988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.245873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.245899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.261899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.261925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.278077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.278104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.293968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.293995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.308531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.308557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.320134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.320163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.335895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.335922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.352124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.352150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.368033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.368059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.384747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.384774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.400833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.400860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.413105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.413149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.713 [2024-11-20 22:35:44.429282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.713 [2024-11-20 22:35:44.429324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:43.713 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.972 [2024-11-20 22:35:44.447181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.972 [2024-11-20 22:35:44.447224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.972 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.462072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.462098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.472553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.472581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.488670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.488697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.505021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.505050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.521587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.521631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.538368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.538409] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.554870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.554898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.571564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.571607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.587165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.587192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.603474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.603501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.619761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.619788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.636045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.636072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.647362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 
22:35:44.647392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.663396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.663437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.680226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.680253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.973 [2024-11-20 22:35:44.696702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.973 [2024-11-20 22:35:44.696729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.973 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.713512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.713556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.730684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.730711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.746571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.746598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.762888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:44.233 [2024-11-20 22:35:44.762916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.773765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.773792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.790479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.790522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.806583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.806610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.822907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.822933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.839270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.839321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.856060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.856102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.872914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:44.233 [2024-11-20 22:35:44.872956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.889172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.889208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.905635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.905662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.922115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.922142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233
00:15:44.233 Latency(us)
00:15:44.233 [2024-11-20T22:35:44.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:44.233 [2024-11-20T22:35:44.967Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:44.233 Nvme1n1 : 5.01 14025.55 109.57 0.00 0.00 9116.26 3902.37 20971.52
00:15:44.233 [2024-11-20T22:35:44.967Z] ===================================================================================================================
00:15:44.233 [2024-11-20T22:35:44.967Z] Total : 14025.55 109.57 0.00 0.00 9116.26 3902.37 20971.52
00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.933905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.933946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.233 [2024-11-20 22:35:44.945892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.945932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid
parameters 00:15:44.233 [2024-11-20 22:35:44.957878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.233 [2024-11-20 22:35:44.957914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.233 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.492 [2024-11-20 22:35:44.969884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.492 [2024-11-20 22:35:44.969922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.492 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.492 [2024-11-20 22:35:44.981885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.492 [2024-11-20 22:35:44.981920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.492 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.492 [2024-11-20 22:35:44.993883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.492 [2024-11-20 22:35:44.993918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.492 2024/11/20 22:35:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.492 [2024-11-20 22:35:45.005886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.492 [2024-11-20 22:35:45.005921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.492 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.492 [2024-11-20 22:35:45.017908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.017944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.029896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.029930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.041896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.041915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.053904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.053942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.065925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.065961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.077922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.077956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.089925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.089944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.101944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.101979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.113963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.113998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.125930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.125964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.137931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.137965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.149933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.149966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.161940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.161974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 [2024-11-20 22:35:45.173944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.493 [2024-11-20 22:35:45.173978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.493 2024/11/20 22:35:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.493 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86163) - No such process 00:15:44.493 22:35:45 -- target/zcopy.sh@49 -- # wait 86163 00:15:44.493 22:35:45 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.493 22:35:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.493 22:35:45 -- common/autotest_common.sh@10 -- # set +x 00:15:44.493 22:35:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.493 22:35:45 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:44.493 22:35:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.493 22:35:45 -- common/autotest_common.sh@10 -- # set +x 00:15:44.493 delay0 00:15:44.493 22:35:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.493 22:35:45 -- target/zcopy.sh@54 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:44.493 22:35:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.493 22:35:45 -- common/autotest_common.sh@10 -- # set +x 00:15:44.493 22:35:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.493 22:35:45 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:44.751 [2024-11-20 22:35:45.360406] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:52.869 Initializing NVMe Controllers 00:15:52.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:52.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:52.869 Initialization complete. Launching workers. 00:15:52.869 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 261, failed: 22684 00:15:52.869 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22857, failed to submit 88 00:15:52.869 success 22756, unsuccess 101, failed 0 00:15:52.869 22:35:52 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:52.869 22:35:52 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:52.869 22:35:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:52.869 22:35:52 -- nvmf/common.sh@116 -- # sync 00:15:52.869 22:35:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:52.869 22:35:52 -- nvmf/common.sh@119 -- # set +e 00:15:52.869 22:35:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:52.869 22:35:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:52.869 rmmod nvme_tcp 00:15:52.869 rmmod nvme_fabrics 00:15:52.869 rmmod nvme_keyring 00:15:52.869 22:35:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:52.869 22:35:52 -- nvmf/common.sh@123 -- # set -e 00:15:52.869 22:35:52 -- nvmf/common.sh@124 -- # return 0 00:15:52.869 22:35:52 -- nvmf/common.sh@477 -- # '[' -n 85989 ']' 00:15:52.869 22:35:52 -- nvmf/common.sh@478 -- # killprocess 85989 00:15:52.869 22:35:52 -- common/autotest_common.sh@936 -- # '[' -z 85989 ']' 00:15:52.869 22:35:52 -- common/autotest_common.sh@940 -- # kill -0 85989 00:15:52.869 22:35:52 -- common/autotest_common.sh@941 -- # uname 00:15:52.869 22:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.869 22:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85989 00:15:52.869 22:35:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:52.869 22:35:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:52.869 killing process with pid 85989 00:15:52.869 22:35:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85989' 00:15:52.869 22:35:52 -- common/autotest_common.sh@955 -- # kill 85989 00:15:52.869 22:35:52 -- common/autotest_common.sh@960 -- # wait 85989 00:15:52.869 22:35:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:52.869 22:35:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:52.869 22:35:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:52.869 22:35:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.869 22:35:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:52.869 22:35:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.869 22:35:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:52.869 22:35:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.869 22:35:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:52.869 00:15:52.869 real 0m25.676s 00:15:52.869 user 0m39.764s 00:15:52.869 sys 0m8.025s 00:15:52.869 22:35:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:52.869 22:35:52 -- common/autotest_common.sh@10 -- # set +x 00:15:52.869 ************************************ 00:15:52.869 END TEST nvmf_zcopy 00:15:52.869 ************************************ 00:15:52.869 22:35:52 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:52.869 22:35:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:52.869 22:35:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.869 22:35:52 -- common/autotest_common.sh@10 -- # set +x 00:15:52.869 ************************************ 00:15:52.869 START TEST nvmf_nmic 00:15:52.869 ************************************ 00:15:52.869 22:35:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:52.869 * Looking for test storage... 00:15:52.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.869 22:35:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:52.869 22:35:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:52.869 22:35:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:52.869 22:35:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:52.869 22:35:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:52.869 22:35:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:52.869 22:35:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:52.869 22:35:52 -- scripts/common.sh@335 -- # IFS=.-: 00:15:52.869 22:35:52 -- scripts/common.sh@335 -- # read -ra ver1 00:15:52.869 22:35:52 -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.869 22:35:52 -- scripts/common.sh@336 -- # read -ra ver2 00:15:52.869 22:35:52 -- scripts/common.sh@337 -- # local 'op=<' 00:15:52.869 22:35:52 -- scripts/common.sh@339 -- # ver1_l=2 00:15:52.869 22:35:52 -- scripts/common.sh@340 -- # ver2_l=1 00:15:52.869 22:35:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:52.869 22:35:52 -- scripts/common.sh@343 -- # case "$op" in 00:15:52.869 22:35:52 -- scripts/common.sh@344 -- # : 1 00:15:52.869 22:35:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:52.869 22:35:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.869 22:35:52 -- scripts/common.sh@364 -- # decimal 1 00:15:52.869 22:35:53 -- scripts/common.sh@352 -- # local d=1 00:15:52.869 22:35:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.869 22:35:53 -- scripts/common.sh@354 -- # echo 1 00:15:52.869 22:35:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:52.869 22:35:53 -- scripts/common.sh@365 -- # decimal 2 00:15:52.870 22:35:53 -- scripts/common.sh@352 -- # local d=2 00:15:52.870 22:35:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.870 22:35:53 -- scripts/common.sh@354 -- # echo 2 00:15:52.870 22:35:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:52.870 22:35:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:52.870 22:35:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:52.870 22:35:53 -- scripts/common.sh@367 -- # return 0 00:15:52.870 22:35:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.870 22:35:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:52.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.870 --rc genhtml_branch_coverage=1 00:15:52.870 --rc genhtml_function_coverage=1 00:15:52.870 --rc genhtml_legend=1 00:15:52.870 --rc geninfo_all_blocks=1 00:15:52.870 --rc geninfo_unexecuted_blocks=1 00:15:52.870 00:15:52.870 ' 00:15:52.870 22:35:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:52.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.870 --rc genhtml_branch_coverage=1 00:15:52.870 --rc genhtml_function_coverage=1 00:15:52.870 --rc genhtml_legend=1 00:15:52.870 --rc geninfo_all_blocks=1 00:15:52.870 --rc geninfo_unexecuted_blocks=1 00:15:52.870 00:15:52.870 ' 00:15:52.870 22:35:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:52.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.870 --rc genhtml_branch_coverage=1 00:15:52.870 --rc genhtml_function_coverage=1 00:15:52.870 --rc genhtml_legend=1 00:15:52.870 --rc geninfo_all_blocks=1 00:15:52.870 --rc geninfo_unexecuted_blocks=1 00:15:52.870 00:15:52.870 ' 00:15:52.870 22:35:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:52.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.870 --rc genhtml_branch_coverage=1 00:15:52.870 --rc genhtml_function_coverage=1 00:15:52.870 --rc genhtml_legend=1 00:15:52.870 --rc geninfo_all_blocks=1 00:15:52.870 --rc geninfo_unexecuted_blocks=1 00:15:52.870 00:15:52.870 ' 00:15:52.870 22:35:53 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.870 22:35:53 -- nvmf/common.sh@7 -- # uname -s 00:15:52.870 22:35:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.870 22:35:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.870 22:35:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.870 22:35:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.870 22:35:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.870 22:35:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.870 22:35:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.870 22:35:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.870 22:35:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.870 22:35:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.870 22:35:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:15:52.870 
22:35:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:15:52.870 22:35:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.870 22:35:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.870 22:35:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.870 22:35:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.870 22:35:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.870 22:35:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.870 22:35:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.870 22:35:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.870 22:35:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.870 22:35:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.870 22:35:53 -- paths/export.sh@5 -- # export PATH 00:15:52.870 22:35:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.870 22:35:53 -- nvmf/common.sh@46 -- # : 0 00:15:52.870 22:35:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:52.870 22:35:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:52.870 22:35:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:52.870 22:35:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.870 22:35:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.870 22:35:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
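For reference: the host identity captured above (NVME_HOSTNQN from nvme gen-hostnqn, with its UUID portion reused as NVME_HOSTID) is what every later nvme connect in this run passes back to the target. A minimal stand-alone sketch of the same idiom, assuming nvme-cli is installed; the derivation shown is illustrative, not a quote of common.sh:

  HOSTNQN=$(nvme gen-hostnqn)      # prints e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}        # keep just the uuid part for --hostid
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"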
00:15:52.870 22:35:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:52.870 22:35:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:52.870 22:35:53 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.870 22:35:53 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.870 22:35:53 -- target/nmic.sh@14 -- # nvmftestinit 00:15:52.870 22:35:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:52.870 22:35:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.870 22:35:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:52.870 22:35:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:52.870 22:35:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:52.870 22:35:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.870 22:35:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.870 22:35:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.870 22:35:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:52.870 22:35:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:52.870 22:35:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:52.870 22:35:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:52.870 22:35:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:52.870 22:35:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:52.870 22:35:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.870 22:35:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.870 22:35:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:52.870 22:35:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:52.870 22:35:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.870 22:35:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.870 22:35:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.870 22:35:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.870 22:35:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.870 22:35:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.870 22:35:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.870 22:35:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.870 22:35:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:52.870 22:35:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:52.870 Cannot find device "nvmf_tgt_br" 00:15:52.870 22:35:53 -- nvmf/common.sh@154 -- # true 00:15:52.870 22:35:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.870 Cannot find device "nvmf_tgt_br2" 00:15:52.871 22:35:53 -- nvmf/common.sh@155 -- # true 00:15:52.871 22:35:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:52.871 22:35:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:52.871 Cannot find device "nvmf_tgt_br" 00:15:52.871 22:35:53 -- nvmf/common.sh@157 -- # true 00:15:52.871 22:35:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:52.871 Cannot find device "nvmf_tgt_br2" 00:15:52.871 22:35:53 -- nvmf/common.sh@158 -- # true 00:15:52.871 22:35:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:52.871 22:35:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:52.871 22:35:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.871 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:52.871 22:35:53 -- nvmf/common.sh@161 -- # true 00:15:52.871 22:35:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.871 22:35:53 -- nvmf/common.sh@162 -- # true 00:15:52.871 22:35:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.871 22:35:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.871 22:35:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.871 22:35:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.871 22:35:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.871 22:35:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.871 22:35:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.871 22:35:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.871 22:35:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:52.871 22:35:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:52.871 22:35:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:52.871 22:35:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:52.871 22:35:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:52.871 22:35:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.871 22:35:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.871 22:35:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.871 22:35:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:52.871 22:35:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:52.871 22:35:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.871 22:35:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.871 22:35:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.871 22:35:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.871 22:35:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.871 22:35:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:52.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:15:52.871 00:15:52.871 --- 10.0.0.2 ping statistics --- 00:15:52.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.871 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:52.871 22:35:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:52.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:52.871 00:15:52.871 --- 10.0.0.3 ping statistics --- 00:15:52.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.871 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:52.871 22:35:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:52.871 00:15:52.871 --- 10.0.0.1 ping statistics --- 00:15:52.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.871 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:52.871 22:35:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.871 22:35:53 -- nvmf/common.sh@421 -- # return 0 00:15:52.871 22:35:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:52.871 22:35:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.871 22:35:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:52.871 22:35:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:52.871 22:35:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.871 22:35:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:52.871 22:35:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:52.871 22:35:53 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:52.871 22:35:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:52.871 22:35:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.871 22:35:53 -- common/autotest_common.sh@10 -- # set +x 00:15:52.871 22:35:53 -- nvmf/common.sh@469 -- # nvmfpid=86497 00:15:52.871 22:35:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.871 22:35:53 -- nvmf/common.sh@470 -- # waitforlisten 86497 00:15:52.871 22:35:53 -- common/autotest_common.sh@829 -- # '[' -z 86497 ']' 00:15:52.871 22:35:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.871 22:35:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.871 22:35:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.871 22:35:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.871 22:35:53 -- common/autotest_common.sh@10 -- # set +x 00:15:52.871 [2024-11-20 22:35:53.453063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:52.871 [2024-11-20 22:35:53.453155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.871 [2024-11-20 22:35:53.592563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.129 [2024-11-20 22:35:53.673726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.129 [2024-11-20 22:35:53.673871] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.129 [2024-11-20 22:35:53.673883] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.129 [2024-11-20 22:35:53.673890] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
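Condensed, the nvmf_veth_init sequence traced above builds the test topology before the target is started: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge. A sketch of the essential commands, taken from the trace (the second target interface and the individual link-up steps are omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) are simply the readiness check for this wiring.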
00:15:53.129 [2024-11-20 22:35:53.673995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.129 [2024-11-20 22:35:53.674842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.129 [2024-11-20 22:35:53.675028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.129 [2024-11-20 22:35:53.675032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.063 22:35:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.063 22:35:54 -- common/autotest_common.sh@862 -- # return 0 00:15:54.063 22:35:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:54.063 22:35:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.063 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.063 22:35:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.063 22:35:54 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.063 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.063 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.063 [2024-11-20 22:35:54.548992] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.063 22:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.063 22:35:54 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:54.063 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.063 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.063 Malloc0 00:15:54.063 22:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.063 22:35:54 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:54.063 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.063 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.063 22:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.063 22:35:54 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.063 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.063 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.063 22:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.063 22:35:54 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.064 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.064 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.064 [2024-11-20 22:35:54.619219] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.064 22:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.064 22:35:54 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:54.064 test case1: single bdev can't be used in multiple subsystems 00:15:54.064 22:35:54 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:54.064 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.064 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.064 22:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.064 22:35:54 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:54.064 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 
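Stripped of the rpc_cmd/xtrace wrappers, the cnode1 provisioning traced above is roughly equivalent to the following scripts/rpc.py calls against the target's /var/tmp/spdk.sock (a condensed sketch using the same RPC names and arguments as the trace, not a quote of nmic.sh; the rpc.py path is abbreviated):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420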
00:15:54.064 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.064 22:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.064 22:35:54 -- target/nmic.sh@28 -- # nmic_status=0 00:15:54.064 22:35:54 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:54.064 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.064 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.064 [2024-11-20 22:35:54.643056] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:54.064 [2024-11-20 22:35:54.643088] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:54.064 [2024-11-20 22:35:54.643098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.064 2024/11/20 22:35:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.064 request: 00:15:54.064 { 00:15:54.064 "method": "nvmf_subsystem_add_ns", 00:15:54.064 "params": { 00:15:54.064 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:54.064 "namespace": { 00:15:54.064 "bdev_name": "Malloc0" 00:15:54.064 } 00:15:54.064 } 00:15:54.064 } 00:15:54.064 Got JSON-RPC error response 00:15:54.064 GoRPCClient: error on JSON-RPC call 00:15:54.064 22:35:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:54.064 22:35:54 -- target/nmic.sh@29 -- # nmic_status=1 00:15:54.064 22:35:54 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:54.064 Adding namespace failed - expected result. 00:15:54.064 22:35:54 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
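Test case1 above is a deliberate negative test: Malloc0 is already claimed (type exclusive_write) by cnode1, so attaching it to a second subsystem has to fail with the Code=-32602 error shown, and nmic.sh records that expected failure via nmic_status=1. Reduced to the failing step, the reproduction is just (sketch, same RPC names and arguments as in the trace):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed by cnode1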
00:15:54.064 test case2: host connect to nvmf target in multiple paths 00:15:54.064 22:35:54 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:54.064 22:35:54 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:54.064 22:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.064 22:35:54 -- common/autotest_common.sh@10 -- # set +x 00:15:54.064 [2024-11-20 22:35:54.655141] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:54.064 22:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.064 22:35:54 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.322 22:35:54 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:54.322 22:35:55 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.322 22:35:55 -- common/autotest_common.sh@1187 -- # local i=0 00:15:54.322 22:35:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.322 22:35:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:54.322 22:35:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:56.854 22:35:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:56.854 22:35:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:56.854 22:35:57 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.854 22:35:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:56.854 22:35:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.854 22:35:57 -- common/autotest_common.sh@1197 -- # return 0 00:15:56.854 22:35:57 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:56.854 [global] 00:15:56.854 thread=1 00:15:56.854 invalidate=1 00:15:56.854 rw=write 00:15:56.854 time_based=1 00:15:56.854 runtime=1 00:15:56.854 ioengine=libaio 00:15:56.854 direct=1 00:15:56.854 bs=4096 00:15:56.854 iodepth=1 00:15:56.854 norandommap=0 00:15:56.854 numjobs=1 00:15:56.854 00:15:56.854 verify_dump=1 00:15:56.854 verify_backlog=512 00:15:56.854 verify_state_save=0 00:15:56.854 do_verify=1 00:15:56.854 verify=crc32c-intel 00:15:56.854 [job0] 00:15:56.854 filename=/dev/nvme0n1 00:15:56.854 Could not set queue depth (nvme0n1) 00:15:56.854 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:56.854 fio-3.35 00:15:56.854 Starting 1 thread 00:15:57.790 00:15:57.790 job0: (groupid=0, jobs=1): err= 0: pid=86607: Wed Nov 20 22:35:58 2024 00:15:57.790 read: IOPS=3120, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:15:57.790 slat (nsec): min=13123, max=85636, avg=17169.30, stdev=5874.50 00:15:57.790 clat (usec): min=118, max=1524, avg=148.75, stdev=29.56 00:15:57.790 lat (usec): min=132, max=1540, avg=165.92, stdev=30.30 00:15:57.790 clat percentiles (usec): 00:15:57.790 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:15:57.790 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:15:57.790 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 
172], 95.00th=[ 180], 00:15:57.790 | 99.00th=[ 198], 99.50th=[ 208], 99.90th=[ 260], 99.95th=[ 269], 00:15:57.790 | 99.99th=[ 1532] 00:15:57.790 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:57.790 slat (usec): min=20, max=119, avg=26.22, stdev= 7.06 00:15:57.790 clat (usec): min=72, max=177, avg=104.89, stdev=12.48 00:15:57.790 lat (usec): min=103, max=296, avg=131.11, stdev=14.95 00:15:57.790 clat percentiles (usec): 00:15:57.790 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:15:57.790 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 105], 00:15:57.790 | 70.00th=[ 110], 80.00th=[ 114], 90.00th=[ 123], 95.00th=[ 129], 00:15:57.790 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 178], 00:15:57.790 | 99.99th=[ 178] 00:15:57.790 bw ( KiB/s): min=14016, max=14016, per=97.87%, avg=14016.00, stdev= 0.00, samples=1 00:15:57.790 iops : min= 3504, max= 3504, avg=3504.00, stdev= 0.00, samples=1 00:15:57.790 lat (usec) : 100=21.66%, 250=78.28%, 500=0.04% 00:15:57.790 lat (msec) : 2=0.01% 00:15:57.790 cpu : usr=2.50%, sys=10.70%, ctx=6708, majf=0, minf=5 00:15:57.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.790 issued rwts: total=3124,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.790 00:15:57.790 Run status group 0 (all jobs): 00:15:57.790 READ: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:15:57.790 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:15:57.790 00:15:57.790 Disk stats (read/write): 00:15:57.790 nvme0n1: ios=2988/3072, merge=0/0, ticks=504/384, in_queue=888, util=91.58% 00:15:57.790 22:35:58 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:57.790 22:35:58 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.790 22:35:58 -- common/autotest_common.sh@1208 -- # local i=0 00:15:57.790 22:35:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:57.790 22:35:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.790 22:35:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.790 22:35:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:57.790 22:35:58 -- common/autotest_common.sh@1220 -- # return 0 00:15:57.790 22:35:58 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:57.790 22:35:58 -- target/nmic.sh@53 -- # nvmftestfini 00:15:57.790 22:35:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:57.790 22:35:58 -- nvmf/common.sh@116 -- # sync 00:15:57.790 22:35:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:57.790 22:35:58 -- nvmf/common.sh@119 -- # set +e 00:15:57.790 22:35:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:57.790 22:35:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:57.790 rmmod nvme_tcp 00:15:57.790 rmmod nvme_fabrics 00:15:57.790 rmmod nvme_keyring 00:15:58.049 22:35:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:58.049 22:35:58 -- nvmf/common.sh@123 -- # set -e 00:15:58.049 22:35:58 -- nvmf/common.sh@124 -- # return 0 00:15:58.049 
22:35:58 -- nvmf/common.sh@477 -- # '[' -n 86497 ']' 00:15:58.049 22:35:58 -- nvmf/common.sh@478 -- # killprocess 86497 00:15:58.049 22:35:58 -- common/autotest_common.sh@936 -- # '[' -z 86497 ']' 00:15:58.049 22:35:58 -- common/autotest_common.sh@940 -- # kill -0 86497 00:15:58.049 22:35:58 -- common/autotest_common.sh@941 -- # uname 00:15:58.049 22:35:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.049 22:35:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86497 00:15:58.049 22:35:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:58.049 22:35:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:58.049 killing process with pid 86497 00:15:58.049 22:35:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86497' 00:15:58.049 22:35:58 -- common/autotest_common.sh@955 -- # kill 86497 00:15:58.049 22:35:58 -- common/autotest_common.sh@960 -- # wait 86497 00:15:58.307 22:35:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:58.307 22:35:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:58.307 22:35:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:58.307 22:35:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.307 22:35:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:58.307 22:35:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.307 22:35:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.307 22:35:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.307 22:35:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:58.307 00:15:58.307 real 0m6.056s 00:15:58.307 user 0m20.432s 00:15:58.307 sys 0m1.387s 00:15:58.307 22:35:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:58.307 ************************************ 00:15:58.307 END TEST nvmf_nmic 00:15:58.307 22:35:58 -- common/autotest_common.sh@10 -- # set +x 00:15:58.307 ************************************ 00:15:58.307 22:35:58 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:58.307 22:35:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:58.307 22:35:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.307 22:35:58 -- common/autotest_common.sh@10 -- # set +x 00:15:58.307 ************************************ 00:15:58.307 START TEST nvmf_fio_target 00:15:58.307 ************************************ 00:15:58.307 22:35:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:58.307 * Looking for test storage... 
00:15:58.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:58.307 22:35:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:58.307 22:35:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:58.307 22:35:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:58.567 22:35:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:58.567 22:35:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:58.567 22:35:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:58.567 22:35:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:58.567 22:35:59 -- scripts/common.sh@335 -- # IFS=.-: 00:15:58.567 22:35:59 -- scripts/common.sh@335 -- # read -ra ver1 00:15:58.567 22:35:59 -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.567 22:35:59 -- scripts/common.sh@336 -- # read -ra ver2 00:15:58.567 22:35:59 -- scripts/common.sh@337 -- # local 'op=<' 00:15:58.567 22:35:59 -- scripts/common.sh@339 -- # ver1_l=2 00:15:58.567 22:35:59 -- scripts/common.sh@340 -- # ver2_l=1 00:15:58.567 22:35:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:58.567 22:35:59 -- scripts/common.sh@343 -- # case "$op" in 00:15:58.567 22:35:59 -- scripts/common.sh@344 -- # : 1 00:15:58.567 22:35:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:58.567 22:35:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:58.567 22:35:59 -- scripts/common.sh@364 -- # decimal 1 00:15:58.567 22:35:59 -- scripts/common.sh@352 -- # local d=1 00:15:58.567 22:35:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.567 22:35:59 -- scripts/common.sh@354 -- # echo 1 00:15:58.567 22:35:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:58.567 22:35:59 -- scripts/common.sh@365 -- # decimal 2 00:15:58.567 22:35:59 -- scripts/common.sh@352 -- # local d=2 00:15:58.567 22:35:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.567 22:35:59 -- scripts/common.sh@354 -- # echo 2 00:15:58.567 22:35:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:58.567 22:35:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:58.567 22:35:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:58.567 22:35:59 -- scripts/common.sh@367 -- # return 0 00:15:58.567 22:35:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.567 22:35:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.567 --rc genhtml_branch_coverage=1 00:15:58.567 --rc genhtml_function_coverage=1 00:15:58.567 --rc genhtml_legend=1 00:15:58.567 --rc geninfo_all_blocks=1 00:15:58.567 --rc geninfo_unexecuted_blocks=1 00:15:58.567 00:15:58.567 ' 00:15:58.567 22:35:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.567 --rc genhtml_branch_coverage=1 00:15:58.567 --rc genhtml_function_coverage=1 00:15:58.567 --rc genhtml_legend=1 00:15:58.567 --rc geninfo_all_blocks=1 00:15:58.567 --rc geninfo_unexecuted_blocks=1 00:15:58.567 00:15:58.567 ' 00:15:58.567 22:35:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.567 --rc genhtml_branch_coverage=1 00:15:58.567 --rc genhtml_function_coverage=1 00:15:58.567 --rc genhtml_legend=1 00:15:58.567 --rc geninfo_all_blocks=1 00:15:58.567 --rc geninfo_unexecuted_blocks=1 00:15:58.567 00:15:58.567 ' 00:15:58.567 
22:35:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.567 --rc genhtml_branch_coverage=1 00:15:58.567 --rc genhtml_function_coverage=1 00:15:58.567 --rc genhtml_legend=1 00:15:58.567 --rc geninfo_all_blocks=1 00:15:58.567 --rc geninfo_unexecuted_blocks=1 00:15:58.567 00:15:58.567 ' 00:15:58.567 22:35:59 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.567 22:35:59 -- nvmf/common.sh@7 -- # uname -s 00:15:58.567 22:35:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.567 22:35:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.567 22:35:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.567 22:35:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.567 22:35:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.567 22:35:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.567 22:35:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.567 22:35:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.567 22:35:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.567 22:35:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.567 22:35:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:15:58.567 22:35:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:15:58.567 22:35:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.567 22:35:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.567 22:35:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.567 22:35:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.567 22:35:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.567 22:35:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.567 22:35:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.567 22:35:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.567 22:35:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.568 22:35:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.568 22:35:59 -- paths/export.sh@5 -- # export PATH 00:15:58.568 22:35:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.568 22:35:59 -- nvmf/common.sh@46 -- # : 0 00:15:58.568 22:35:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:58.568 22:35:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:58.568 22:35:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:58.568 22:35:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.568 22:35:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.568 22:35:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:58.568 22:35:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:58.568 22:35:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:58.568 22:35:59 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.568 22:35:59 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.568 22:35:59 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.568 22:35:59 -- target/fio.sh@16 -- # nvmftestinit 00:15:58.568 22:35:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:58.568 22:35:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.568 22:35:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:58.568 22:35:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:58.568 22:35:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:58.568 22:35:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.568 22:35:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.568 22:35:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.568 22:35:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:58.568 22:35:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:58.568 22:35:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:58.568 22:35:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:58.568 22:35:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:58.568 22:35:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:58.568 22:35:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.568 22:35:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.568 22:35:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:58.568 22:35:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:58.568 22:35:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.568 22:35:59 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.568 22:35:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.568 22:35:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.568 22:35:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.568 22:35:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.568 22:35:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.568 22:35:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.568 22:35:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:58.568 22:35:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:58.568 Cannot find device "nvmf_tgt_br" 00:15:58.568 22:35:59 -- nvmf/common.sh@154 -- # true 00:15:58.568 22:35:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.568 Cannot find device "nvmf_tgt_br2" 00:15:58.568 22:35:59 -- nvmf/common.sh@155 -- # true 00:15:58.568 22:35:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:58.568 22:35:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:58.568 Cannot find device "nvmf_tgt_br" 00:15:58.568 22:35:59 -- nvmf/common.sh@157 -- # true 00:15:58.568 22:35:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:58.568 Cannot find device "nvmf_tgt_br2" 00:15:58.568 22:35:59 -- nvmf/common.sh@158 -- # true 00:15:58.568 22:35:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:58.568 22:35:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:58.568 22:35:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.568 22:35:59 -- nvmf/common.sh@161 -- # true 00:15:58.568 22:35:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.568 22:35:59 -- nvmf/common.sh@162 -- # true 00:15:58.568 22:35:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.568 22:35:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.568 22:35:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.826 22:35:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.826 22:35:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.826 22:35:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.826 22:35:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.826 22:35:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.826 22:35:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:58.826 22:35:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:58.826 22:35:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:58.826 22:35:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:58.826 22:35:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:58.826 22:35:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.826 22:35:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:15:58.826 22:35:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.827 22:35:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:58.827 22:35:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:58.827 22:35:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.827 22:35:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.827 22:35:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.827 22:35:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.827 22:35:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.827 22:35:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:58.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:58.827 00:15:58.827 --- 10.0.0.2 ping statistics --- 00:15:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.827 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:58.827 22:35:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:58.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:15:58.827 00:15:58.827 --- 10.0.0.3 ping statistics --- 00:15:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.827 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:58.827 22:35:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:15:58.827 00:15:58.827 --- 10.0.0.1 ping statistics --- 00:15:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.827 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:58.827 22:35:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.827 22:35:59 -- nvmf/common.sh@421 -- # return 0 00:15:58.827 22:35:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:58.827 22:35:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.827 22:35:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:58.827 22:35:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:58.827 22:35:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.827 22:35:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:58.827 22:35:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:58.827 22:35:59 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:58.827 22:35:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:58.827 22:35:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:58.827 22:35:59 -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 22:35:59 -- nvmf/common.sh@469 -- # nvmfpid=86792 00:15:58.827 22:35:59 -- nvmf/common.sh@470 -- # waitforlisten 86792 00:15:58.827 22:35:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.827 22:35:59 -- common/autotest_common.sh@829 -- # '[' -z 86792 ']' 00:15:58.827 22:35:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.827 22:35:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
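As in the nmic run, nvmfappstart above launches the target inside the test namespace (pid 86792 here) and waitforlisten then blocks until the JSON-RPC socket at /var/tmp/spdk.sock answers. Reduced to its core, the pattern looks like this (a sketch; the rpc_get_methods poll approximates what waitforlisten does rather than quoting it, and the rpc.py path is abbreviated):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done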
00:15:58.827 22:35:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.827 22:35:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.827 22:35:59 -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [2024-11-20 22:35:59.544637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:58.827 [2024-11-20 22:35:59.544719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.085 [2024-11-20 22:35:59.683116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.085 [2024-11-20 22:35:59.759669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:59.085 [2024-11-20 22:35:59.760364] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.085 [2024-11-20 22:35:59.760608] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.085 [2024-11-20 22:35:59.760846] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.085 [2024-11-20 22:35:59.761299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.085 [2024-11-20 22:35:59.761643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.085 [2024-11-20 22:35:59.761668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.085 [2024-11-20 22:35:59.761399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.021 22:36:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.021 22:36:00 -- common/autotest_common.sh@862 -- # return 0 00:16:00.021 22:36:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:00.021 22:36:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.021 22:36:00 -- common/autotest_common.sh@10 -- # set +x 00:16:00.021 22:36:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.021 22:36:00 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:00.279 [2024-11-20 22:36:00.825697] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.279 22:36:00 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.538 22:36:01 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:00.538 22:36:01 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.827 22:36:01 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:00.827 22:36:01 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.141 22:36:01 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:01.141 22:36:01 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.399 22:36:01 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:01.399 22:36:01 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:01.657 22:36:02 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.915 22:36:02 -- 
target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:01.915 22:36:02 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:02.174 22:36:02 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:02.174 22:36:02 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:02.432 22:36:03 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:02.432 22:36:03 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:02.689 22:36:03 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:02.947 22:36:03 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:02.947 22:36:03 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:03.205 22:36:03 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:03.205 22:36:03 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:03.463 22:36:04 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.721 [2024-11-20 22:36:04.275587] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.721 22:36:04 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:03.979 22:36:04 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:04.238 22:36:04 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:04.238 22:36:04 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:04.238 22:36:04 -- common/autotest_common.sh@1187 -- # local i=0 00:16:04.496 22:36:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.496 22:36:04 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:04.496 22:36:04 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:04.496 22:36:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:06.396 22:36:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:06.396 22:36:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:06.396 22:36:06 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:06.396 22:36:06 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:06.396 22:36:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.396 22:36:06 -- common/autotest_common.sh@1197 -- # return 0 00:16:06.396 22:36:06 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:06.396 [global] 00:16:06.396 thread=1 00:16:06.396 invalidate=1 00:16:06.396 rw=write 00:16:06.396 time_based=1 00:16:06.396 runtime=1 00:16:06.396 ioengine=libaio 00:16:06.396 direct=1 00:16:06.396 bs=4096 00:16:06.396 iodepth=1 00:16:06.396 norandommap=0 00:16:06.396 numjobs=1 00:16:06.396 00:16:06.396 verify_dump=1 00:16:06.396 verify_backlog=512 
00:16:06.396 verify_state_save=0 00:16:06.396 do_verify=1 00:16:06.396 verify=crc32c-intel 00:16:06.396 [job0] 00:16:06.396 filename=/dev/nvme0n1 00:16:06.396 [job1] 00:16:06.396 filename=/dev/nvme0n2 00:16:06.396 [job2] 00:16:06.396 filename=/dev/nvme0n3 00:16:06.396 [job3] 00:16:06.396 filename=/dev/nvme0n4 00:16:06.396 Could not set queue depth (nvme0n1) 00:16:06.396 Could not set queue depth (nvme0n2) 00:16:06.396 Could not set queue depth (nvme0n3) 00:16:06.396 Could not set queue depth (nvme0n4) 00:16:06.654 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.654 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.654 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.654 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.654 fio-3.35 00:16:06.654 Starting 4 threads 00:16:08.026 00:16:08.026 job0: (groupid=0, jobs=1): err= 0: pid=87091: Wed Nov 20 22:36:08 2024 00:16:08.026 read: IOPS=1202, BW=4811KiB/s (4927kB/s)(4816KiB/1001msec) 00:16:08.026 slat (nsec): min=15869, max=84031, avg=21512.79, stdev=6570.32 00:16:08.026 clat (usec): min=201, max=3788, avg=369.85, stdev=106.74 00:16:08.026 lat (usec): min=257, max=3815, avg=391.36, stdev=106.98 00:16:08.026 clat percentiles (usec): 00:16:08.026 | 1.00th=[ 293], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 338], 00:16:08.026 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:16:08.026 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 441], 00:16:08.026 | 99.00th=[ 494], 99.50th=[ 553], 99.90th=[ 668], 99.95th=[ 3785], 00:16:08.026 | 99.99th=[ 3785] 00:16:08.026 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:08.026 slat (usec): min=26, max=124, avg=41.73, stdev= 9.52 00:16:08.026 clat (usec): min=169, max=1171, avg=297.69, stdev=63.81 00:16:08.026 lat (usec): min=204, max=1208, avg=339.41, stdev=64.00 00:16:08.026 clat percentiles (usec): 00:16:08.026 | 1.00th=[ 210], 5.00th=[ 229], 10.00th=[ 241], 20.00th=[ 258], 00:16:08.026 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:16:08.026 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 379], 95.00th=[ 400], 00:16:08.026 | 99.00th=[ 457], 99.50th=[ 506], 99.90th=[ 971], 99.95th=[ 1172], 00:16:08.026 | 99.99th=[ 1172] 00:16:08.026 bw ( KiB/s): min= 7072, max= 7072, per=24.69%, avg=7072.00, stdev= 0.00, samples=1 00:16:08.026 iops : min= 1768, max= 1768, avg=1768.00, stdev= 0.00, samples=1 00:16:08.026 lat (usec) : 250=8.94%, 500=90.29%, 750=0.58%, 1000=0.11% 00:16:08.026 lat (msec) : 2=0.04%, 4=0.04% 00:16:08.026 cpu : usr=1.80%, sys=6.70%, ctx=2741, majf=0, minf=15 00:16:08.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.027 issued rwts: total=1204,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.027 job1: (groupid=0, jobs=1): err= 0: pid=87092: Wed Nov 20 22:36:08 2024 00:16:08.027 read: IOPS=1659, BW=6637KiB/s (6797kB/s)(6644KiB/1001msec) 00:16:08.027 slat (nsec): min=10578, max=55362, avg=16626.74, stdev=5182.27 00:16:08.027 clat (usec): min=160, max=1979, avg=268.44, stdev=83.20 00:16:08.027 lat 
(usec): min=176, max=2010, avg=285.07, stdev=81.95 00:16:08.027 clat percentiles (usec): 00:16:08.027 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 206], 00:16:08.027 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 258], 00:16:08.027 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 383], 00:16:08.027 | 99.00th=[ 469], 99.50th=[ 490], 99.90th=[ 529], 99.95th=[ 1975], 00:16:08.027 | 99.99th=[ 1975] 00:16:08.027 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:08.027 slat (nsec): min=12655, max=99418, avg=27226.93, stdev=8046.59 00:16:08.027 clat (usec): min=110, max=49300, avg=226.64, stdev=1086.40 00:16:08.027 lat (usec): min=136, max=49322, avg=253.87, stdev=1086.20 00:16:08.027 clat percentiles (usec): 00:16:08.027 | 1.00th=[ 126], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 157], 00:16:08.027 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 194], 00:16:08.027 | 70.00th=[ 212], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 314], 00:16:08.027 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 396], 99.95th=[ 404], 00:16:08.027 | 99.99th=[49546] 00:16:08.027 bw ( KiB/s): min= 9960, max= 9960, per=34.77%, avg=9960.00, stdev= 0.00, samples=1 00:16:08.027 iops : min= 2490, max= 2490, avg=2490.00, stdev= 0.00, samples=1 00:16:08.027 lat (usec) : 250=68.27%, 500=31.54%, 750=0.13% 00:16:08.027 lat (msec) : 2=0.03%, 50=0.03% 00:16:08.027 cpu : usr=1.80%, sys=5.80%, ctx=3710, majf=0, minf=2 00:16:08.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.027 issued rwts: total=1661,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.027 job2: (groupid=0, jobs=1): err= 0: pid=87093: Wed Nov 20 22:36:08 2024 00:16:08.027 read: IOPS=1233, BW=4935KiB/s (5054kB/s)(4940KiB/1001msec) 00:16:08.027 slat (nsec): min=16097, max=85594, avg=26713.40, stdev=7012.44 00:16:08.027 clat (usec): min=184, max=747, avg=357.75, stdev=41.60 00:16:08.027 lat (usec): min=206, max=771, avg=384.47, stdev=42.23 00:16:08.027 clat percentiles (usec): 00:16:08.027 | 1.00th=[ 237], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 330], 00:16:08.027 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 359], 00:16:08.027 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 424], 00:16:08.027 | 99.00th=[ 469], 99.50th=[ 502], 99.90th=[ 652], 99.95th=[ 750], 00:16:08.027 | 99.99th=[ 750] 00:16:08.027 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:08.027 slat (usec): min=31, max=129, avg=40.90, stdev= 9.31 00:16:08.027 clat (usec): min=165, max=863, avg=295.64, stdev=55.83 00:16:08.027 lat (usec): min=199, max=906, avg=336.53, stdev=55.95 00:16:08.027 clat percentiles (usec): 00:16:08.027 | 1.00th=[ 202], 5.00th=[ 227], 10.00th=[ 239], 20.00th=[ 258], 00:16:08.027 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:16:08.027 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 371], 95.00th=[ 400], 00:16:08.027 | 99.00th=[ 449], 99.50th=[ 486], 99.90th=[ 750], 99.95th=[ 865], 00:16:08.027 | 99.99th=[ 865] 00:16:08.027 bw ( KiB/s): min= 7096, max= 7096, per=24.77%, avg=7096.00, stdev= 0.00, samples=1 00:16:08.027 iops : min= 1774, max= 1774, avg=1774.00, stdev= 0.00, samples=1 00:16:08.027 lat (usec) : 250=9.64%, 500=89.86%, 750=0.43%, 1000=0.07% 00:16:08.027 
cpu : usr=1.70%, sys=7.40%, ctx=2786, majf=0, minf=7 00:16:08.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.027 issued rwts: total=1235,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.027 job3: (groupid=0, jobs=1): err= 0: pid=87094: Wed Nov 20 22:36:08 2024 00:16:08.027 read: IOPS=1727, BW=6909KiB/s (7075kB/s)(6916KiB/1001msec) 00:16:08.027 slat (nsec): min=10780, max=67058, avg=16920.49, stdev=5664.04 00:16:08.027 clat (usec): min=158, max=7461, avg=277.26, stdev=188.36 00:16:08.027 lat (usec): min=177, max=7475, avg=294.18, stdev=187.83 00:16:08.027 clat percentiles (usec): 00:16:08.027 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 212], 00:16:08.027 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 245], 60.00th=[ 277], 00:16:08.027 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 392], 00:16:08.027 | 99.00th=[ 457], 99.50th=[ 482], 99.90th=[ 1385], 99.95th=[ 7439], 00:16:08.027 | 99.99th=[ 7439] 00:16:08.027 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:08.027 slat (usec): min=12, max=129, avg=27.20, stdev= 9.08 00:16:08.027 clat (usec): min=88, max=3243, avg=209.45, stdev=100.87 00:16:08.027 lat (usec): min=141, max=3268, avg=236.65, stdev=100.02 00:16:08.027 clat percentiles (usec): 00:16:08.027 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 163], 00:16:08.027 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 200], 00:16:08.027 | 70.00th=[ 219], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 314], 00:16:08.027 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 400], 99.95th=[ 2573], 00:16:08.027 | 99.99th=[ 3228] 00:16:08.027 bw ( KiB/s): min= 8952, max= 8952, per=31.25%, avg=8952.00, stdev= 0.00, samples=1 00:16:08.027 iops : min= 2238, max= 2238, avg=2238.00, stdev= 0.00, samples=1 00:16:08.027 lat (usec) : 100=0.03%, 250=66.16%, 500=33.65%, 750=0.03%, 1000=0.03% 00:16:08.027 lat (msec) : 2=0.03%, 4=0.05%, 10=0.03% 00:16:08.027 cpu : usr=1.80%, sys=5.90%, ctx=3781, majf=0, minf=15 00:16:08.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.027 issued rwts: total=1729,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.027 00:16:08.027 Run status group 0 (all jobs): 00:16:08.027 READ: bw=22.7MiB/s (23.9MB/s), 4811KiB/s-6909KiB/s (4927kB/s-7075kB/s), io=22.8MiB (23.9MB), run=1001-1001msec 00:16:08.027 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:16:08.027 00:16:08.027 Disk stats (read/write): 00:16:08.027 nvme0n1: ios=1074/1339, merge=0/0, ticks=400/412, in_queue=812, util=87.58% 00:16:08.027 nvme0n2: ios=1580/1727, merge=0/0, ticks=444/401, in_queue=845, util=89.15% 00:16:08.027 nvme0n3: ios=1041/1364, merge=0/0, ticks=396/423, in_queue=819, util=89.53% 00:16:08.027 nvme0n4: ios=1536/1829, merge=0/0, ticks=409/376, in_queue=785, util=88.95% 00:16:08.027 22:36:08 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:08.027 [global] 
00:16:08.027 thread=1 00:16:08.027 invalidate=1 00:16:08.027 rw=randwrite 00:16:08.027 time_based=1 00:16:08.027 runtime=1 00:16:08.027 ioengine=libaio 00:16:08.027 direct=1 00:16:08.027 bs=4096 00:16:08.027 iodepth=1 00:16:08.027 norandommap=0 00:16:08.027 numjobs=1 00:16:08.027 00:16:08.027 verify_dump=1 00:16:08.027 verify_backlog=512 00:16:08.027 verify_state_save=0 00:16:08.027 do_verify=1 00:16:08.027 verify=crc32c-intel 00:16:08.027 [job0] 00:16:08.027 filename=/dev/nvme0n1 00:16:08.027 [job1] 00:16:08.027 filename=/dev/nvme0n2 00:16:08.027 [job2] 00:16:08.027 filename=/dev/nvme0n3 00:16:08.027 [job3] 00:16:08.027 filename=/dev/nvme0n4 00:16:08.027 Could not set queue depth (nvme0n1) 00:16:08.027 Could not set queue depth (nvme0n2) 00:16:08.027 Could not set queue depth (nvme0n3) 00:16:08.027 Could not set queue depth (nvme0n4) 00:16:08.027 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.027 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.027 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.027 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.027 fio-3.35 00:16:08.027 Starting 4 threads 00:16:09.422 00:16:09.422 job0: (groupid=0, jobs=1): err= 0: pid=87147: Wed Nov 20 22:36:09 2024 00:16:09.422 read: IOPS=1695, BW=6781KiB/s (6944kB/s)(6788KiB/1001msec) 00:16:09.422 slat (usec): min=8, max=107, avg=16.18, stdev= 7.10 00:16:09.422 clat (usec): min=102, max=2497, avg=295.26, stdev=92.86 00:16:09.422 lat (usec): min=151, max=2513, avg=311.44, stdev=92.34 00:16:09.422 clat percentiles (usec): 00:16:09.422 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 198], 20.00th=[ 219], 00:16:09.422 | 30.00th=[ 245], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 310], 00:16:09.422 | 70.00th=[ 334], 80.00th=[ 367], 90.00th=[ 400], 95.00th=[ 424], 00:16:09.422 | 99.00th=[ 461], 99.50th=[ 482], 99.90th=[ 676], 99.95th=[ 2507], 00:16:09.422 | 99.99th=[ 2507] 00:16:09.422 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:09.422 slat (usec): min=11, max=120, avg=22.07, stdev= 7.62 00:16:09.422 clat (usec): min=109, max=394, avg=205.34, stdev=56.78 00:16:09.422 lat (usec): min=130, max=418, avg=227.41, stdev=56.44 00:16:09.422 clat percentiles (usec): 00:16:09.422 | 1.00th=[ 119], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 155], 00:16:09.422 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 192], 60.00th=[ 208], 00:16:09.422 | 70.00th=[ 237], 80.00th=[ 262], 90.00th=[ 289], 95.00th=[ 310], 00:16:09.422 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 379], 99.95th=[ 383], 00:16:09.422 | 99.99th=[ 396] 00:16:09.422 bw ( KiB/s): min= 8192, max= 8192, per=24.88%, avg=8192.00, stdev= 0.00, samples=1 00:16:09.422 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:09.422 lat (usec) : 250=55.51%, 500=44.38%, 750=0.08% 00:16:09.422 lat (msec) : 4=0.03% 00:16:09.422 cpu : usr=1.10%, sys=5.70%, ctx=3759, majf=0, minf=13 00:16:09.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.423 issued rwts: total=1697,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.423 latency : target=0, window=0, percentile=100.00%, depth=1 
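Each per-job block in these fio reports follows the same pattern: slat is submission latency, clat is completion latency with its percentile table, and the bw/iops figures are simply issued I/O divided by runtime. As a quick sanity check against job0 of this randwrite pass: 1697 issued reads x 4 KiB = 6788 KiB over 1001 ms, i.e. 6788/1.001 ≈ 6781 KiB/s and 1697/1.001 ≈ 1695 IOPS, matching the reported BW=6781KiB/s and IOPS=1695; the write side works out the same way (2048 x 4 KiB over 1001 ms ≈ 8184 KiB/s). The "Could not set queue depth" warnings ahead of each run are informational here: all four jobs still complete with err= 0 and verification enabled.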
00:16:09.423 job1: (groupid=0, jobs=1): err= 0: pid=87148: Wed Nov 20 22:36:09 2024 00:16:09.423 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:09.423 slat (nsec): min=8829, max=86664, avg=17154.15, stdev=6289.57 00:16:09.423 clat (usec): min=162, max=8080, avg=336.67, stdev=214.42 00:16:09.423 lat (usec): min=176, max=8102, avg=353.82, stdev=214.86 00:16:09.423 clat percentiles (usec): 00:16:09.423 | 1.00th=[ 202], 5.00th=[ 241], 10.00th=[ 255], 20.00th=[ 281], 00:16:09.423 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 343], 00:16:09.423 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[ 433], 00:16:09.423 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 1942], 99.95th=[ 8094], 00:16:09.423 | 99.99th=[ 8094] 00:16:09.423 write: IOPS=1687, BW=6749KiB/s (6911kB/s)(6756KiB/1001msec); 0 zone resets 00:16:09.423 slat (nsec): min=11297, max=95020, avg=25475.91, stdev=11058.72 00:16:09.423 clat (usec): min=111, max=553, avg=241.34, stdev=58.63 00:16:09.423 lat (usec): min=130, max=589, avg=266.82, stdev=62.94 00:16:09.423 clat percentiles (usec): 00:16:09.423 | 1.00th=[ 133], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 184], 00:16:09.423 | 30.00th=[ 206], 40.00th=[ 227], 50.00th=[ 247], 60.00th=[ 262], 00:16:09.423 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 330], 00:16:09.423 | 99.00th=[ 388], 99.50th=[ 412], 99.90th=[ 474], 99.95th=[ 553], 00:16:09.423 | 99.99th=[ 553] 00:16:09.423 bw ( KiB/s): min= 8192, max= 8192, per=24.88%, avg=8192.00, stdev= 0.00, samples=1 00:16:09.423 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:09.423 lat (usec) : 250=31.44%, 500=68.31%, 750=0.12% 00:16:09.423 lat (msec) : 2=0.09%, 10=0.03% 00:16:09.423 cpu : usr=1.60%, sys=5.00%, ctx=3226, majf=0, minf=15 00:16:09.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.423 issued rwts: total=1536,1689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.423 job2: (groupid=0, jobs=1): err= 0: pid=87149: Wed Nov 20 22:36:09 2024 00:16:09.423 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:09.423 slat (nsec): min=11315, max=51258, avg=15866.69, stdev=4116.06 00:16:09.423 clat (usec): min=166, max=1272, avg=227.65, stdev=38.41 00:16:09.423 lat (usec): min=184, max=1292, avg=243.52, stdev=38.37 00:16:09.423 clat percentiles (usec): 00:16:09.423 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:16:09.423 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 231], 00:16:09.423 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 277], 00:16:09.423 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 523], 99.95th=[ 758], 00:16:09.423 | 99.99th=[ 1270] 00:16:09.423 write: IOPS=2288, BW=9155KiB/s (9375kB/s)(9164KiB/1001msec); 0 zone resets 00:16:09.423 slat (nsec): min=16836, max=92462, avg=24858.86, stdev=6820.71 00:16:09.423 clat (usec): min=122, max=459, avg=190.62, stdev=29.35 00:16:09.423 lat (usec): min=141, max=487, avg=215.48, stdev=30.30 00:16:09.423 clat percentiles (usec): 00:16:09.423 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 167], 00:16:09.423 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:16:09.423 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 241], 00:16:09.423 | 99.00th=[ 289], 99.50th=[ 
314], 99.90th=[ 388], 99.95th=[ 404], 00:16:09.423 | 99.99th=[ 461] 00:16:09.423 bw ( KiB/s): min= 9392, max= 9392, per=28.52%, avg=9392.00, stdev= 0.00, samples=1 00:16:09.423 iops : min= 2348, max= 2348, avg=2348.00, stdev= 0.00, samples=1 00:16:09.423 lat (usec) : 250=90.50%, 500=9.43%, 750=0.02%, 1000=0.02% 00:16:09.423 lat (msec) : 2=0.02% 00:16:09.423 cpu : usr=1.20%, sys=7.10%, ctx=4342, majf=0, minf=11 00:16:09.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.423 issued rwts: total=2048,2291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.423 job3: (groupid=0, jobs=1): err= 0: pid=87150: Wed Nov 20 22:36:09 2024 00:16:09.423 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:09.423 slat (nsec): min=13942, max=69885, avg=19541.92, stdev=5477.51 00:16:09.423 clat (usec): min=157, max=2598, avg=227.51, stdev=88.50 00:16:09.423 lat (usec): min=172, max=2632, avg=247.06, stdev=90.38 00:16:09.423 clat percentiles (usec): 00:16:09.423 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:16:09.423 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:16:09.423 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 318], 95.00th=[ 355], 00:16:09.423 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 857], 99.95th=[ 2311], 00:16:09.423 | 99.99th=[ 2606] 00:16:09.423 write: IOPS=2210, BW=8843KiB/s (9055kB/s)(8852KiB/1001msec); 0 zone resets 00:16:09.423 slat (nsec): min=20004, max=99737, avg=29557.86, stdev=9029.83 00:16:09.423 clat (usec): min=114, max=478, avg=189.82, stdev=54.31 00:16:09.423 lat (usec): min=135, max=508, avg=219.38, stdev=59.28 00:16:09.423 clat percentiles (usec): 00:16:09.423 | 1.00th=[ 123], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 149], 00:16:09.423 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 182], 00:16:09.423 | 70.00th=[ 198], 80.00th=[ 233], 90.00th=[ 273], 95.00th=[ 297], 00:16:09.423 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 416], 99.95th=[ 433], 00:16:09.423 | 99.99th=[ 478] 00:16:09.423 bw ( KiB/s): min= 8208, max= 8208, per=24.92%, avg=8208.00, stdev= 0.00, samples=1 00:16:09.423 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:16:09.423 lat (usec) : 250=83.90%, 500=16.03%, 1000=0.02% 00:16:09.423 lat (msec) : 4=0.05% 00:16:09.423 cpu : usr=1.90%, sys=7.70%, ctx=4261, majf=0, minf=7 00:16:09.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.423 issued rwts: total=2048,2213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.423 00:16:09.423 Run status group 0 (all jobs): 00:16:09.423 READ: bw=28.6MiB/s (30.0MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:16:09.423 WRITE: bw=32.2MiB/s (33.7MB/s), 6749KiB/s-9155KiB/s (6911kB/s-9375kB/s), io=32.2MiB (33.8MB), run=1001-1001msec 00:16:09.423 00:16:09.423 Disk stats (read/write): 00:16:09.423 nvme0n1: ios=1586/1690, merge=0/0, ticks=486/354, in_queue=840, util=87.78% 00:16:09.423 nvme0n2: ios=1266/1536, merge=0/0, ticks=460/380, in_queue=840, util=88.84% 00:16:09.423 nvme0n3: 
ios=1726/2048, merge=0/0, ticks=442/420, in_queue=862, util=89.76% 00:16:09.423 nvme0n4: ios=1562/2048, merge=0/0, ticks=376/416, in_queue=792, util=89.59% 00:16:09.423 22:36:09 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:09.423 [global] 00:16:09.423 thread=1 00:16:09.423 invalidate=1 00:16:09.423 rw=write 00:16:09.423 time_based=1 00:16:09.423 runtime=1 00:16:09.423 ioengine=libaio 00:16:09.423 direct=1 00:16:09.423 bs=4096 00:16:09.423 iodepth=128 00:16:09.423 norandommap=0 00:16:09.423 numjobs=1 00:16:09.423 00:16:09.423 verify_dump=1 00:16:09.423 verify_backlog=512 00:16:09.423 verify_state_save=0 00:16:09.423 do_verify=1 00:16:09.423 verify=crc32c-intel 00:16:09.423 [job0] 00:16:09.423 filename=/dev/nvme0n1 00:16:09.423 [job1] 00:16:09.423 filename=/dev/nvme0n2 00:16:09.423 [job2] 00:16:09.423 filename=/dev/nvme0n3 00:16:09.423 [job3] 00:16:09.423 filename=/dev/nvme0n4 00:16:09.423 Could not set queue depth (nvme0n1) 00:16:09.423 Could not set queue depth (nvme0n2) 00:16:09.423 Could not set queue depth (nvme0n3) 00:16:09.423 Could not set queue depth (nvme0n4) 00:16:09.423 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.423 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.423 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.423 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.423 fio-3.35 00:16:09.423 Starting 4 threads 00:16:10.800 00:16:10.800 job0: (groupid=0, jobs=1): err= 0: pid=87211: Wed Nov 20 22:36:11 2024 00:16:10.800 read: IOPS=4444, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1005msec) 00:16:10.800 slat (usec): min=4, max=13627, avg=101.65, stdev=587.89 00:16:10.800 clat (usec): min=1832, max=29257, avg=13470.60, stdev=2778.82 00:16:10.800 lat (usec): min=5709, max=29298, avg=13572.25, stdev=2803.28 00:16:10.800 clat percentiles (usec): 00:16:10.800 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[11076], 20.00th=[11863], 00:16:10.800 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:16:10.800 | 70.00th=[13698], 80.00th=[15008], 90.00th=[16712], 95.00th=[19006], 00:16:10.800 | 99.00th=[24773], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:16:10.800 | 99.99th=[29230] 00:16:10.800 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:16:10.800 slat (usec): min=12, max=8879, avg=111.06, stdev=635.83 00:16:10.800 clat (usec): min=7623, max=32989, avg=14500.47, stdev=3185.10 00:16:10.800 lat (usec): min=7645, max=33027, avg=14611.53, stdev=3218.57 00:16:10.800 clat percentiles (usec): 00:16:10.800 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[12125], 20.00th=[12780], 00:16:10.800 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13829], 00:16:10.800 | 70.00th=[14353], 80.00th=[17171], 90.00th=[19006], 95.00th=[20841], 00:16:10.800 | 99.00th=[24511], 99.50th=[24773], 99.90th=[28967], 99.95th=[29230], 00:16:10.800 | 99.99th=[32900] 00:16:10.800 bw ( KiB/s): min=17920, max=18944, per=34.53%, avg=18432.00, stdev=724.08, samples=2 00:16:10.800 iops : min= 4480, max= 4736, avg=4608.00, stdev=181.02, samples=2 00:16:10.800 lat (msec) : 2=0.01%, 10=5.32%, 20=88.35%, 50=6.31% 00:16:10.800 cpu : usr=3.98%, sys=13.75%, ctx=383, majf=0, minf=13 00:16:10.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.4%, >=64=99.3% 00:16:10.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.800 issued rwts: total=4467,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.800 job1: (groupid=0, jobs=1): err= 0: pid=87212: Wed Nov 20 22:36:11 2024 00:16:10.800 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:16:10.800 slat (usec): min=8, max=3631, avg=99.69, stdev=445.59 00:16:10.800 clat (usec): min=9344, max=18023, avg=13206.54, stdev=1143.86 00:16:10.800 lat (usec): min=9941, max=20968, avg=13306.23, stdev=1074.97 00:16:10.800 clat percentiles (usec): 00:16:10.800 | 1.00th=[10290], 5.00th=[11076], 10.00th=[11863], 20.00th=[12387], 00:16:10.800 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:16:10.800 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14615], 95.00th=[14877], 00:16:10.800 | 99.00th=[15795], 99.50th=[16450], 99.90th=[17695], 99.95th=[17695], 00:16:10.800 | 99.99th=[17957] 00:16:10.800 write: IOPS=4798, BW=18.7MiB/s (19.7MB/s)(18.8MiB/1004msec); 0 zone resets 00:16:10.800 slat (usec): min=11, max=3914, avg=104.49, stdev=401.83 00:16:10.800 clat (usec): min=398, max=18545, avg=13665.54, stdev=1786.86 00:16:10.800 lat (usec): min=3909, max=18598, avg=13770.03, stdev=1769.90 00:16:10.800 clat percentiles (usec): 00:16:10.800 | 1.00th=[ 8848], 5.00th=[10945], 10.00th=[11469], 20.00th=[12256], 00:16:10.800 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13829], 60.00th=[14091], 00:16:10.800 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15926], 95.00th=[16581], 00:16:10.800 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:16:10.800 | 99.99th=[18482] 00:16:10.800 bw ( KiB/s): min=17040, max=20480, per=35.14%, avg=18760.00, stdev=2432.45, samples=2 00:16:10.800 iops : min= 4260, max= 5120, avg=4690.00, stdev=608.11, samples=2 00:16:10.800 lat (usec) : 500=0.01% 00:16:10.800 lat (msec) : 4=0.04%, 10=0.93%, 20=99.01% 00:16:10.800 cpu : usr=4.59%, sys=13.86%, ctx=739, majf=0, minf=11 00:16:10.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:10.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.800 issued rwts: total=4608,4818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.800 job2: (groupid=0, jobs=1): err= 0: pid=87213: Wed Nov 20 22:36:11 2024 00:16:10.800 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:16:10.800 slat (usec): min=4, max=13003, avg=303.57, stdev=1470.10 00:16:10.800 clat (usec): min=26325, max=54835, avg=39740.93, stdev=4499.14 00:16:10.800 lat (usec): min=29457, max=54851, avg=40044.50, stdev=4353.81 00:16:10.800 clat percentiles (usec): 00:16:10.800 | 1.00th=[29754], 5.00th=[34341], 10.00th=[35390], 20.00th=[36439], 00:16:10.800 | 30.00th=[36963], 40.00th=[37487], 50.00th=[38536], 60.00th=[40109], 00:16:10.800 | 70.00th=[42730], 80.00th=[43779], 90.00th=[45351], 95.00th=[46400], 00:16:10.800 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:16:10.800 | 99.99th=[54789] 00:16:10.800 write: IOPS=1942, BW=7769KiB/s (7956kB/s)(7808KiB/1005msec); 0 zone resets 00:16:10.800 slat (usec): min=10, max=14104, avg=267.81, stdev=1213.88 00:16:10.801 clat (usec): min=464, max=45158, avg=33009.27, 
stdev=6796.19 00:16:10.801 lat (usec): min=5005, max=45178, avg=33277.08, stdev=6772.94 00:16:10.801 clat percentiles (usec): 00:16:10.801 | 1.00th=[ 5669], 5.00th=[21627], 10.00th=[25035], 20.00th=[28967], 00:16:10.801 | 30.00th=[30802], 40.00th=[32375], 50.00th=[33424], 60.00th=[34866], 00:16:10.801 | 70.00th=[36963], 80.00th=[39060], 90.00th=[40633], 95.00th=[41157], 00:16:10.801 | 99.00th=[42206], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:16:10.801 | 99.99th=[45351] 00:16:10.801 bw ( KiB/s): min= 6400, max= 8208, per=13.68%, avg=7304.00, stdev=1278.45, samples=2 00:16:10.801 iops : min= 1600, max= 2052, avg=1826.00, stdev=319.61, samples=2 00:16:10.801 lat (usec) : 500=0.03% 00:16:10.801 lat (msec) : 10=1.03%, 20=1.12%, 50=96.73%, 100=1.09% 00:16:10.801 cpu : usr=1.29%, sys=6.47%, ctx=360, majf=0, minf=17 00:16:10.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.801 issued rwts: total=1536,1952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.801 job3: (groupid=0, jobs=1): err= 0: pid=87214: Wed Nov 20 22:36:11 2024 00:16:10.801 read: IOPS=1655, BW=6620KiB/s (6779kB/s)(6660KiB/1006msec) 00:16:10.801 slat (usec): min=4, max=14876, avg=304.23, stdev=1473.33 00:16:10.801 clat (usec): min=492, max=57407, avg=37847.41, stdev=7600.01 00:16:10.801 lat (usec): min=6409, max=57446, avg=38151.64, stdev=7529.47 00:16:10.801 clat percentiles (usec): 00:16:10.801 | 1.00th=[ 6783], 5.00th=[25822], 10.00th=[31851], 20.00th=[35390], 00:16:10.801 | 30.00th=[36439], 40.00th=[36963], 50.00th=[37487], 60.00th=[38011], 00:16:10.801 | 70.00th=[40109], 80.00th=[43779], 90.00th=[46924], 95.00th=[48497], 00:16:10.801 | 99.00th=[53216], 99.50th=[56886], 99.90th=[56886], 99.95th=[57410], 00:16:10.801 | 99.99th=[57410] 00:16:10.801 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:16:10.801 slat (usec): min=11, max=8969, avg=235.24, stdev=1012.25 00:16:10.801 clat (usec): min=16081, max=54901, avg=31101.96, stdev=7440.43 00:16:10.801 lat (usec): min=16109, max=54975, avg=31337.20, stdev=7456.40 00:16:10.801 clat percentiles (usec): 00:16:10.801 | 1.00th=[17433], 5.00th=[19006], 10.00th=[20841], 20.00th=[23987], 00:16:10.801 | 30.00th=[27132], 40.00th=[30016], 50.00th=[31065], 60.00th=[32637], 00:16:10.801 | 70.00th=[34866], 80.00th=[38011], 90.00th=[39584], 95.00th=[42730], 00:16:10.801 | 99.00th=[49546], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:16:10.801 | 99.99th=[54789] 00:16:10.801 bw ( KiB/s): min= 8192, max= 8192, per=15.35%, avg=8192.00, stdev= 0.00, samples=2 00:16:10.801 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:16:10.801 lat (usec) : 500=0.03% 00:16:10.801 lat (msec) : 10=0.86%, 20=4.90%, 50=92.38%, 100=1.83% 00:16:10.801 cpu : usr=2.59%, sys=5.87%, ctx=394, majf=0, minf=11 00:16:10.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:10.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.801 issued rwts: total=1665,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.801 00:16:10.801 Run status group 0 (all jobs): 00:16:10.801 READ: 
bw=47.7MiB/s (50.0MB/s), 6113KiB/s-17.9MiB/s (6260kB/s-18.8MB/s), io=48.0MiB (50.3MB), run=1004-1006msec 00:16:10.801 WRITE: bw=52.1MiB/s (54.7MB/s), 7769KiB/s-18.7MiB/s (7956kB/s-19.7MB/s), io=52.4MiB (55.0MB), run=1004-1006msec 00:16:10.801 00:16:10.801 Disk stats (read/write): 00:16:10.801 nvme0n1: ios=4062/4096, merge=0/0, ticks=24434/24824, in_queue=49258, util=88.58% 00:16:10.801 nvme0n2: ios=4145/4223, merge=0/0, ticks=12676/12436, in_queue=25112, util=90.41% 00:16:10.801 nvme0n3: ios=1447/1536, merge=0/0, ticks=13066/12576, in_queue=25642, util=91.08% 00:16:10.801 nvme0n4: ios=1568/1605, merge=0/0, ticks=14407/11153, in_queue=25560, util=90.32% 00:16:10.801 22:36:11 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:10.801 [global] 00:16:10.801 thread=1 00:16:10.801 invalidate=1 00:16:10.801 rw=randwrite 00:16:10.801 time_based=1 00:16:10.801 runtime=1 00:16:10.801 ioengine=libaio 00:16:10.801 direct=1 00:16:10.801 bs=4096 00:16:10.801 iodepth=128 00:16:10.801 norandommap=0 00:16:10.801 numjobs=1 00:16:10.801 00:16:10.801 verify_dump=1 00:16:10.801 verify_backlog=512 00:16:10.801 verify_state_save=0 00:16:10.801 do_verify=1 00:16:10.801 verify=crc32c-intel 00:16:10.801 [job0] 00:16:10.801 filename=/dev/nvme0n1 00:16:10.801 [job1] 00:16:10.801 filename=/dev/nvme0n2 00:16:10.801 [job2] 00:16:10.801 filename=/dev/nvme0n3 00:16:10.801 [job3] 00:16:10.801 filename=/dev/nvme0n4 00:16:10.801 Could not set queue depth (nvme0n1) 00:16:10.801 Could not set queue depth (nvme0n2) 00:16:10.801 Could not set queue depth (nvme0n3) 00:16:10.801 Could not set queue depth (nvme0n4) 00:16:10.801 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.801 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.801 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.801 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.801 fio-3.35 00:16:10.801 Starting 4 threads 00:16:12.178 00:16:12.178 job0: (groupid=0, jobs=1): err= 0: pid=87271: Wed Nov 20 22:36:12 2024 00:16:12.178 read: IOPS=2149, BW=8597KiB/s (8803kB/s)(8640KiB/1005msec) 00:16:12.178 slat (usec): min=4, max=11617, avg=185.77, stdev=963.27 00:16:12.178 clat (usec): min=4380, max=61134, avg=23315.16, stdev=7154.74 00:16:12.178 lat (usec): min=4394, max=61186, avg=23500.93, stdev=7256.52 00:16:12.178 clat percentiles (usec): 00:16:12.178 | 1.00th=[ 4752], 5.00th=[16909], 10.00th=[17695], 20.00th=[19006], 00:16:12.178 | 30.00th=[19792], 40.00th=[20579], 50.00th=[20841], 60.00th=[21890], 00:16:12.178 | 70.00th=[24511], 80.00th=[28181], 90.00th=[32113], 95.00th=[35914], 00:16:12.178 | 99.00th=[51643], 99.50th=[54264], 99.90th=[60031], 99.95th=[60031], 00:16:12.178 | 99.99th=[61080] 00:16:12.178 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:16:12.178 slat (usec): min=5, max=25546, avg=226.07, stdev=1347.54 00:16:12.178 clat (usec): min=13048, max=76257, avg=28556.24, stdev=11964.71 00:16:12.178 lat (usec): min=13067, max=76269, avg=28782.31, stdev=12061.49 00:16:12.178 clat percentiles (usec): 00:16:12.178 | 1.00th=[16057], 5.00th=[17433], 10.00th=[18220], 20.00th=[19268], 00:16:12.178 | 30.00th=[21103], 40.00th=[23200], 50.00th=[25560], 60.00th=[27132], 00:16:12.178 | 70.00th=[30278], 
80.00th=[35390], 90.00th=[40109], 95.00th=[60556], 00:16:12.178 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[74974], 00:16:12.178 | 99.99th=[76022] 00:16:12.178 bw ( KiB/s): min= 9392, max=10968, per=18.06%, avg=10180.00, stdev=1114.40, samples=2 00:16:12.178 iops : min= 2348, max= 2742, avg=2545.00, stdev=278.60, samples=2 00:16:12.178 lat (msec) : 10=1.08%, 20=28.28%, 50=66.08%, 100=4.56% 00:16:12.178 cpu : usr=2.09%, sys=6.87%, ctx=488, majf=0, minf=19 00:16:12.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:12.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.178 issued rwts: total=2160,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.178 job1: (groupid=0, jobs=1): err= 0: pid=87273: Wed Nov 20 22:36:12 2024 00:16:12.178 read: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec) 00:16:12.178 slat (usec): min=6, max=21896, avg=171.79, stdev=1134.99 00:16:12.178 clat (usec): min=4266, max=72786, avg=19350.05, stdev=10434.46 00:16:12.178 lat (usec): min=4284, max=72802, avg=19521.84, stdev=10546.46 00:16:12.178 clat percentiles (usec): 00:16:12.178 | 1.00th=[ 7373], 5.00th=[ 7701], 10.00th=[ 8979], 20.00th=[ 9503], 00:16:12.178 | 30.00th=[12256], 40.00th=[15270], 50.00th=[17433], 60.00th=[21365], 00:16:12.178 | 70.00th=[22152], 80.00th=[25560], 90.00th=[32113], 95.00th=[36963], 00:16:12.178 | 99.00th=[62653], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:16:12.178 | 99.99th=[72877] 00:16:12.178 write: IOPS=2488, BW=9955KiB/s (10.2MB/s)(9.87MiB/1015msec); 0 zone resets 00:16:12.178 slat (usec): min=5, max=13706, avg=249.61, stdev=1099.74 00:16:12.178 clat (msec): min=3, max=111, avg=35.23, stdev=25.16 00:16:12.178 lat (msec): min=3, max=111, avg=35.48, stdev=25.30 00:16:12.178 clat percentiles (msec): 00:16:12.178 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 18], 00:16:12.178 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 25], 60.00th=[ 32], 00:16:12.178 | 70.00th=[ 46], 80.00th=[ 57], 90.00th=[ 73], 95.00th=[ 87], 00:16:12.178 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 112], 00:16:12.178 | 99.99th=[ 112] 00:16:12.178 bw ( KiB/s): min= 8192, max=10992, per=17.02%, avg=9592.00, stdev=1979.90, samples=2 00:16:12.178 iops : min= 2048, max= 2748, avg=2398.00, stdev=494.97, samples=2 00:16:12.178 lat (msec) : 4=0.81%, 10=15.57%, 20=28.82%, 50=39.59%, 100=13.55% 00:16:12.178 lat (msec) : 250=1.66% 00:16:12.178 cpu : usr=2.76%, sys=5.52%, ctx=317, majf=0, minf=7 00:16:12.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:12.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.178 issued rwts: total=2048,2526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.178 job2: (groupid=0, jobs=1): err= 0: pid=87274: Wed Nov 20 22:36:12 2024 00:16:12.178 read: IOPS=6227, BW=24.3MiB/s (25.5MB/s)(24.5MiB/1007msec) 00:16:12.178 slat (usec): min=4, max=8723, avg=76.30, stdev=501.51 00:16:12.178 clat (usec): min=2555, max=19164, avg=10211.24, stdev=2367.09 00:16:12.178 lat (usec): min=4311, max=19181, avg=10287.54, stdev=2391.46 00:16:12.178 clat percentiles (usec): 00:16:12.178 | 1.00th=[ 6390], 5.00th=[ 7373], 10.00th=[ 7701], 
20.00th=[ 8586], 00:16:12.178 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10159], 00:16:12.178 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13173], 95.00th=[14877], 00:16:12.178 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:16:12.178 | 99.99th=[19268] 00:16:12.178 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:16:12.178 slat (usec): min=5, max=8251, avg=72.21, stdev=476.92 00:16:12.178 clat (usec): min=3760, max=19107, avg=9544.06, stdev=1690.01 00:16:12.178 lat (usec): min=3781, max=19116, avg=9616.27, stdev=1753.23 00:16:12.178 clat percentiles (usec): 00:16:12.178 | 1.00th=[ 4146], 5.00th=[ 5866], 10.00th=[ 7570], 20.00th=[ 8586], 00:16:12.178 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:16:12.178 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11076], 00:16:12.178 | 99.00th=[11469], 99.50th=[15401], 99.90th=[18744], 99.95th=[19006], 00:16:12.178 | 99.99th=[19006] 00:16:12.178 bw ( KiB/s): min=26384, max=26856, per=47.23%, avg=26620.00, stdev=333.75, samples=2 00:16:12.178 iops : min= 6596, max= 6714, avg=6655.00, stdev=83.44, samples=2 00:16:12.178 lat (msec) : 4=0.35%, 10=55.12%, 20=44.53% 00:16:12.178 cpu : usr=5.37%, sys=14.61%, ctx=678, majf=0, minf=15 00:16:12.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:12.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.179 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.179 job3: (groupid=0, jobs=1): err= 0: pid=87275: Wed Nov 20 22:36:12 2024 00:16:12.179 read: IOPS=2289, BW=9160KiB/s (9380kB/s)(9224KiB/1007msec) 00:16:12.179 slat (usec): min=4, max=16318, avg=201.71, stdev=1043.67 00:16:12.179 clat (usec): min=6537, max=63046, avg=23987.00, stdev=8488.45 00:16:12.179 lat (usec): min=6563, max=63443, avg=24188.71, stdev=8578.05 00:16:12.179 clat percentiles (usec): 00:16:12.179 | 1.00th=[13698], 5.00th=[15795], 10.00th=[17957], 20.00th=[18482], 00:16:12.179 | 30.00th=[19530], 40.00th=[20317], 50.00th=[20841], 60.00th=[21890], 00:16:12.179 | 70.00th=[24511], 80.00th=[27657], 90.00th=[35914], 95.00th=[46924], 00:16:12.179 | 99.00th=[51119], 99.50th=[56361], 99.90th=[60556], 99.95th=[61604], 00:16:12.179 | 99.99th=[63177] 00:16:12.179 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:16:12.179 slat (usec): min=3, max=23089, avg=201.58, stdev=1142.58 00:16:12.179 clat (usec): min=12052, max=72186, avg=28232.57, stdev=11167.75 00:16:12.179 lat (usec): min=12076, max=72200, avg=28434.15, stdev=11267.52 00:16:12.179 clat percentiles (usec): 00:16:12.179 | 1.00th=[14484], 5.00th=[16909], 10.00th=[17695], 20.00th=[18744], 00:16:12.179 | 30.00th=[21627], 40.00th=[24249], 50.00th=[26084], 60.00th=[27657], 00:16:12.179 | 70.00th=[30278], 80.00th=[33162], 90.00th=[42730], 95.00th=[53216], 00:16:12.179 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[70779], 00:16:12.179 | 99.99th=[71828] 00:16:12.179 bw ( KiB/s): min= 8280, max=12200, per=18.17%, avg=10240.00, stdev=2771.86, samples=2 00:16:12.179 iops : min= 2070, max= 3050, avg=2560.00, stdev=692.96, samples=2 00:16:12.179 lat (msec) : 10=0.04%, 20=30.15%, 50=65.54%, 100=4.27% 00:16:12.179 cpu : usr=3.08%, sys=5.96%, ctx=552, majf=0, minf=5 00:16:12.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:12.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.179 issued rwts: total=2306,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.179 00:16:12.179 Run status group 0 (all jobs): 00:16:12.179 READ: bw=49.2MiB/s (51.6MB/s), 8071KiB/s-24.3MiB/s (8265kB/s-25.5MB/s), io=49.9MiB (52.4MB), run=1005-1015msec 00:16:12.179 WRITE: bw=55.0MiB/s (57.7MB/s), 9955KiB/s-25.8MiB/s (10.2MB/s-27.1MB/s), io=55.9MiB (58.6MB), run=1005-1015msec 00:16:12.179 00:16:12.179 Disk stats (read/write): 00:16:12.179 nvme0n1: ios=1843/2048, merge=0/0, ticks=20227/29053, in_queue=49280, util=86.17% 00:16:12.179 nvme0n2: ios=2097/2127, merge=0/0, ticks=37241/65354, in_queue=102595, util=89.08% 00:16:12.179 nvme0n3: ios=5516/5632, merge=0/0, ticks=51949/50224, in_queue=102173, util=90.53% 00:16:12.179 nvme0n4: ios=2044/2048, merge=0/0, ticks=23993/28647, in_queue=52640, util=90.26% 00:16:12.179 22:36:12 -- target/fio.sh@55 -- # sync 00:16:12.179 22:36:12 -- target/fio.sh@59 -- # fio_pid=87288 00:16:12.179 22:36:12 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:12.179 22:36:12 -- target/fio.sh@61 -- # sleep 3 00:16:12.179 [global] 00:16:12.179 thread=1 00:16:12.179 invalidate=1 00:16:12.179 rw=read 00:16:12.179 time_based=1 00:16:12.179 runtime=10 00:16:12.179 ioengine=libaio 00:16:12.179 direct=1 00:16:12.179 bs=4096 00:16:12.179 iodepth=1 00:16:12.179 norandommap=1 00:16:12.179 numjobs=1 00:16:12.179 00:16:12.179 [job0] 00:16:12.179 filename=/dev/nvme0n1 00:16:12.179 [job1] 00:16:12.179 filename=/dev/nvme0n2 00:16:12.179 [job2] 00:16:12.179 filename=/dev/nvme0n3 00:16:12.179 [job3] 00:16:12.179 filename=/dev/nvme0n4 00:16:12.179 Could not set queue depth (nvme0n1) 00:16:12.179 Could not set queue depth (nvme0n2) 00:16:12.179 Could not set queue depth (nvme0n3) 00:16:12.179 Could not set queue depth (nvme0n4) 00:16:12.179 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.179 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.179 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.179 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.179 fio-3.35 00:16:12.179 Starting 4 threads 00:16:15.466 22:36:15 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:15.466 fio: pid=87331, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:15.466 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=19587072, buflen=4096 00:16:15.466 22:36:15 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:15.466 fio: pid=87330, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:15.466 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=58970112, buflen=4096 00:16:15.467 22:36:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:15.467 22:36:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:15.725 fio: pid=87328, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:15.725 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=61018112, buflen=4096 00:16:15.725 22:36:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:15.725 22:36:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:15.984 fio: pid=87329, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:15.984 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51662848, buflen=4096 00:16:15.984 00:16:15.984 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87328: Wed Nov 20 22:36:16 2024 00:16:15.984 read: IOPS=4447, BW=17.4MiB/s (18.2MB/s)(58.2MiB/3350msec) 00:16:15.984 slat (usec): min=6, max=13882, avg=16.85, stdev=162.64 00:16:15.984 clat (usec): min=112, max=3237, avg=206.80, stdev=70.53 00:16:15.984 lat (usec): min=133, max=14172, avg=223.66, stdev=177.99 00:16:15.984 clat percentiles (usec): 00:16:15.984 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:16:15.984 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 206], 00:16:15.984 | 70.00th=[ 219], 80.00th=[ 237], 90.00th=[ 265], 95.00th=[ 293], 00:16:15.984 | 99.00th=[ 355], 99.50th=[ 404], 99.90th=[ 791], 99.95th=[ 1319], 00:16:15.984 | 99.99th=[ 2999] 00:16:15.984 bw ( KiB/s): min=16520, max=19072, per=36.27%, avg=18333.33, stdev=910.44, samples=6 00:16:15.984 iops : min= 4130, max= 4768, avg=4583.33, stdev=227.61, samples=6 00:16:15.984 lat (usec) : 250=85.38%, 500=14.32%, 750=0.17%, 1000=0.05% 00:16:15.984 lat (msec) : 2=0.05%, 4=0.03% 00:16:15.984 cpu : usr=1.16%, sys=5.20%, ctx=15056, majf=0, minf=1 00:16:15.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.984 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.984 issued rwts: total=14898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.984 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87329: Wed Nov 20 22:36:16 2024 00:16:15.984 read: IOPS=3413, BW=13.3MiB/s (14.0MB/s)(49.3MiB/3695msec) 00:16:15.984 slat (usec): min=6, max=11747, avg=20.27, stdev=213.84 00:16:15.984 clat (usec): min=3, max=29980, avg=271.36, stdev=286.86 00:16:15.984 lat (usec): min=131, max=29993, avg=291.63, stdev=357.13 00:16:15.984 clat percentiles (usec): 00:16:15.984 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 153], 20.00th=[ 196], 00:16:15.984 | 30.00th=[ 210], 40.00th=[ 225], 50.00th=[ 241], 60.00th=[ 285], 00:16:15.984 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 412], 00:16:15.984 | 99.00th=[ 453], 99.50th=[ 482], 99.90th=[ 1287], 99.95th=[ 2180], 00:16:15.984 | 99.99th=[ 3228] 00:16:15.984 bw ( KiB/s): min=12760, max=17229, per=26.79%, avg=13540.14, stdev=1631.74, samples=7 00:16:15.984 iops : min= 3190, max= 4307, avg=3385.00, stdev=407.84, samples=7 00:16:15.984 lat (usec) : 4=0.02%, 250=53.25%, 500=46.31%, 750=0.18%, 1000=0.07% 00:16:15.984 lat (msec) : 2=0.10%, 4=0.05%, 50=0.01% 00:16:15.984 cpu : usr=0.92%, sys=4.30%, ctx=12802, majf=0, minf=1 00:16:15.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.984 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.984 issued rwts: total=12614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.984 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87330: Wed Nov 20 22:36:16 2024 00:16:15.984 read: IOPS=4582, BW=17.9MiB/s (18.8MB/s)(56.2MiB/3142msec) 00:16:15.984 slat (usec): min=6, max=11351, avg=14.95, stdev=119.55 00:16:15.984 clat (usec): min=132, max=3053, avg=202.13, stdev=59.18 00:16:15.984 lat (usec): min=144, max=11567, avg=217.08, stdev=134.18 00:16:15.984 clat percentiles (usec): 00:16:15.984 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:16:15.984 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:16:15.984 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 273], 00:16:15.984 | 99.00th=[ 347], 99.50th=[ 379], 99.90th=[ 930], 99.95th=[ 1287], 00:16:15.984 | 99.99th=[ 2180] 00:16:15.984 bw ( KiB/s): min=16320, max=19360, per=36.92%, avg=18662.67, stdev=1159.09, samples=6 00:16:15.984 iops : min= 4080, max= 4840, avg=4665.67, stdev=289.77, samples=6 00:16:15.984 lat (usec) : 250=93.02%, 500=6.74%, 750=0.09%, 1000=0.06% 00:16:15.984 lat (msec) : 2=0.08%, 4=0.01% 00:16:15.984 cpu : usr=0.80%, sys=5.41%, ctx=14536, majf=0, minf=2 00:16:15.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.984 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.984 issued rwts: total=14398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.984 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87331: Wed Nov 20 22:36:16 2024 00:16:15.984 read: IOPS=1636, BW=6544KiB/s (6701kB/s)(18.7MiB/2923msec) 00:16:15.984 slat (usec): min=19, max=118, avg=36.69, stdev= 9.46 00:16:15.984 clat (usec): min=182, max=4472, avg=569.88, stdev=92.79 00:16:15.984 lat (usec): min=206, max=4521, avg=606.57, stdev=92.54 00:16:15.984 clat percentiles (usec): 00:16:15.984 | 1.00th=[ 392], 5.00th=[ 469], 10.00th=[ 502], 20.00th=[ 523], 00:16:15.984 | 30.00th=[ 537], 40.00th=[ 553], 50.00th=[ 570], 60.00th=[ 586], 00:16:15.984 | 70.00th=[ 603], 80.00th=[ 619], 90.00th=[ 644], 95.00th=[ 660], 00:16:15.984 | 99.00th=[ 701], 99.50th=[ 725], 99.90th=[ 881], 99.95th=[ 1188], 00:16:15.984 | 99.99th=[ 4490] 00:16:15.984 bw ( KiB/s): min= 6432, max= 6576, per=12.85%, avg=6494.40, stdev=57.80, samples=5 00:16:15.985 iops : min= 1608, max= 1644, avg=1623.60, stdev=14.45, samples=5 00:16:15.985 lat (usec) : 250=0.27%, 500=9.45%, 750=89.99%, 1000=0.21% 00:16:15.985 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:16:15.985 cpu : usr=1.85%, sys=4.83%, ctx=4783, majf=0, minf=2 00:16:15.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.985 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.985 issued rwts: total=4783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.985 00:16:15.985 Run status group 0 (all jobs): 00:16:15.985 READ: bw=49.4MiB/s (51.8MB/s), 6544KiB/s-17.9MiB/s 
(6701kB/s-18.8MB/s), io=182MiB (191MB), run=2923-3695msec 00:16:15.985 00:16:15.985 Disk stats (read/write): 00:16:15.985 nvme0n1: ios=14089/0, merge=0/0, ticks=2970/0, in_queue=2970, util=95.66% 00:16:15.985 nvme0n2: ios=12342/0, merge=0/0, ticks=3435/0, in_queue=3435, util=95.32% 00:16:15.985 nvme0n3: ios=14338/0, merge=0/0, ticks=2963/0, in_queue=2963, util=96.24% 00:16:15.985 nvme0n4: ios=4671/0, merge=0/0, ticks=2710/0, in_queue=2710, util=96.76% 00:16:15.985 22:36:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:15.985 22:36:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:16.243 22:36:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:16.243 22:36:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:16.501 22:36:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:16.501 22:36:17 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:16.759 22:36:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:16.759 22:36:17 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:17.325 22:36:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.325 22:36:17 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:17.325 22:36:18 -- target/fio.sh@69 -- # fio_status=0 00:16:17.325 22:36:18 -- target/fio.sh@70 -- # wait 87288 00:16:17.325 22:36:18 -- target/fio.sh@70 -- # fio_status=4 00:16:17.325 22:36:18 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.325 22:36:18 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:17.325 22:36:18 -- common/autotest_common.sh@1208 -- # local i=0 00:16:17.325 22:36:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.325 22:36:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:17.325 22:36:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:17.325 22:36:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.583 nvmf hotplug test: fio failed as expected 00:16:17.583 22:36:18 -- common/autotest_common.sh@1220 -- # return 0 00:16:17.583 22:36:18 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:17.583 22:36:18 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:17.583 22:36:18 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.842 22:36:18 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:17.842 22:36:18 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:17.842 22:36:18 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:17.842 22:36:18 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:17.842 22:36:18 -- target/fio.sh@91 -- # nvmftestfini 00:16:17.842 22:36:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:17.842 22:36:18 -- nvmf/common.sh@116 -- # sync 00:16:17.842 22:36:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:17.842 22:36:18 -- nvmf/common.sh@119 -- # set +e 
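The "Operation not supported" read failures in the fio output above are the point of this phase of the test: target/fio.sh removes the backing Malloc bdevs while fio is still reading from the exported namespaces, the surviving I/O starts failing, fio exits non-zero (fio_status=4), and the script reports "nvmf hotplug test: fio failed as expected". A minimal sketch of the same sequence outside the harness, reusing the rpc.py path, bdev name and NQN from this log (the fio job file name is hypothetical):

  # start fio against the connected /dev/nvme0nX namespaces in the background
  fio ./hotplug-verify.fio &
  # pull one backing bdev out from under the running I/O
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
  # reads against that namespace now fail with "Operation not supported"
  wait $! || echo 'nvmf hotplug test: fio failed as expected'
  # detach the initiator once the check is done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1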
00:16:17.842 22:36:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:17.842 22:36:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:17.842 rmmod nvme_tcp 00:16:17.842 rmmod nvme_fabrics 00:16:17.842 rmmod nvme_keyring 00:16:17.842 22:36:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:17.842 22:36:18 -- nvmf/common.sh@123 -- # set -e 00:16:17.842 22:36:18 -- nvmf/common.sh@124 -- # return 0 00:16:17.842 22:36:18 -- nvmf/common.sh@477 -- # '[' -n 86792 ']' 00:16:17.842 22:36:18 -- nvmf/common.sh@478 -- # killprocess 86792 00:16:17.842 22:36:18 -- common/autotest_common.sh@936 -- # '[' -z 86792 ']' 00:16:17.842 22:36:18 -- common/autotest_common.sh@940 -- # kill -0 86792 00:16:17.842 22:36:18 -- common/autotest_common.sh@941 -- # uname 00:16:17.842 22:36:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:17.842 22:36:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86792 00:16:17.842 killing process with pid 86792 00:16:17.842 22:36:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:17.842 22:36:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:17.842 22:36:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86792' 00:16:17.842 22:36:18 -- common/autotest_common.sh@955 -- # kill 86792 00:16:17.842 22:36:18 -- common/autotest_common.sh@960 -- # wait 86792 00:16:18.100 22:36:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:18.100 22:36:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:18.100 22:36:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:18.100 22:36:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.100 22:36:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:18.100 22:36:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.100 22:36:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.100 22:36:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.100 22:36:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:18.100 00:16:18.100 real 0m19.806s 00:16:18.100 user 1m16.248s 00:16:18.100 sys 0m8.238s 00:16:18.100 22:36:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:18.100 22:36:18 -- common/autotest_common.sh@10 -- # set +x 00:16:18.100 ************************************ 00:16:18.100 END TEST nvmf_fio_target 00:16:18.100 ************************************ 00:16:18.100 22:36:18 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:18.100 22:36:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:18.100 22:36:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.100 22:36:18 -- common/autotest_common.sh@10 -- # set +x 00:16:18.100 ************************************ 00:16:18.100 START TEST nvmf_bdevio 00:16:18.100 ************************************ 00:16:18.100 22:36:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:18.359 * Looking for test storage... 
00:16:18.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:18.359 22:36:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:18.359 22:36:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:18.359 22:36:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:18.359 22:36:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:18.359 22:36:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:18.359 22:36:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:18.359 22:36:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:18.359 22:36:18 -- scripts/common.sh@335 -- # IFS=.-: 00:16:18.359 22:36:18 -- scripts/common.sh@335 -- # read -ra ver1 00:16:18.359 22:36:18 -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.359 22:36:18 -- scripts/common.sh@336 -- # read -ra ver2 00:16:18.359 22:36:18 -- scripts/common.sh@337 -- # local 'op=<' 00:16:18.359 22:36:18 -- scripts/common.sh@339 -- # ver1_l=2 00:16:18.359 22:36:18 -- scripts/common.sh@340 -- # ver2_l=1 00:16:18.359 22:36:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:18.359 22:36:18 -- scripts/common.sh@343 -- # case "$op" in 00:16:18.359 22:36:18 -- scripts/common.sh@344 -- # : 1 00:16:18.359 22:36:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:18.359 22:36:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.359 22:36:18 -- scripts/common.sh@364 -- # decimal 1 00:16:18.359 22:36:18 -- scripts/common.sh@352 -- # local d=1 00:16:18.359 22:36:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.359 22:36:18 -- scripts/common.sh@354 -- # echo 1 00:16:18.359 22:36:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:18.359 22:36:18 -- scripts/common.sh@365 -- # decimal 2 00:16:18.359 22:36:18 -- scripts/common.sh@352 -- # local d=2 00:16:18.359 22:36:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.360 22:36:18 -- scripts/common.sh@354 -- # echo 2 00:16:18.360 22:36:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:18.360 22:36:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:18.360 22:36:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:18.360 22:36:18 -- scripts/common.sh@367 -- # return 0 00:16:18.360 22:36:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.360 22:36:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:18.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.360 --rc genhtml_branch_coverage=1 00:16:18.360 --rc genhtml_function_coverage=1 00:16:18.360 --rc genhtml_legend=1 00:16:18.360 --rc geninfo_all_blocks=1 00:16:18.360 --rc geninfo_unexecuted_blocks=1 00:16:18.360 00:16:18.360 ' 00:16:18.360 22:36:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:18.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.360 --rc genhtml_branch_coverage=1 00:16:18.360 --rc genhtml_function_coverage=1 00:16:18.360 --rc genhtml_legend=1 00:16:18.360 --rc geninfo_all_blocks=1 00:16:18.360 --rc geninfo_unexecuted_blocks=1 00:16:18.360 00:16:18.360 ' 00:16:18.360 22:36:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:18.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.360 --rc genhtml_branch_coverage=1 00:16:18.360 --rc genhtml_function_coverage=1 00:16:18.360 --rc genhtml_legend=1 00:16:18.360 --rc geninfo_all_blocks=1 00:16:18.360 --rc geninfo_unexecuted_blocks=1 00:16:18.360 00:16:18.360 ' 00:16:18.360 
22:36:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:18.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.360 --rc genhtml_branch_coverage=1 00:16:18.360 --rc genhtml_function_coverage=1 00:16:18.360 --rc genhtml_legend=1 00:16:18.360 --rc geninfo_all_blocks=1 00:16:18.360 --rc geninfo_unexecuted_blocks=1 00:16:18.360 00:16:18.360 ' 00:16:18.360 22:36:18 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:18.360 22:36:19 -- nvmf/common.sh@7 -- # uname -s 00:16:18.360 22:36:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.360 22:36:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.360 22:36:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.360 22:36:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.360 22:36:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.360 22:36:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.360 22:36:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.360 22:36:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.360 22:36:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.360 22:36:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.360 22:36:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:16:18.360 22:36:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:16:18.360 22:36:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.360 22:36:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.360 22:36:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:18.360 22:36:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:18.360 22:36:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.360 22:36:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.360 22:36:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.360 22:36:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.360 22:36:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.360 22:36:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.360 22:36:19 -- paths/export.sh@5 -- # export PATH 00:16:18.360 22:36:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.360 22:36:19 -- nvmf/common.sh@46 -- # : 0 00:16:18.360 22:36:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:18.360 22:36:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:18.360 22:36:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:18.360 22:36:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.360 22:36:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.360 22:36:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:18.360 22:36:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:18.360 22:36:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:18.360 22:36:19 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:18.360 22:36:19 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:18.360 22:36:19 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:18.360 22:36:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:18.360 22:36:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.360 22:36:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:18.360 22:36:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:18.360 22:36:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:18.360 22:36:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.360 22:36:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.360 22:36:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.360 22:36:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:18.360 22:36:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:18.360 22:36:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:18.360 22:36:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:18.360 22:36:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:18.360 22:36:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:18.360 22:36:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.360 22:36:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.360 22:36:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:18.360 22:36:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:18.360 22:36:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:18.360 22:36:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:18.360 22:36:19 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:18.360 22:36:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.360 22:36:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:18.360 22:36:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:18.360 22:36:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:18.360 22:36:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:18.360 22:36:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:18.360 22:36:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:18.360 Cannot find device "nvmf_tgt_br" 00:16:18.360 22:36:19 -- nvmf/common.sh@154 -- # true 00:16:18.360 22:36:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.360 Cannot find device "nvmf_tgt_br2" 00:16:18.360 22:36:19 -- nvmf/common.sh@155 -- # true 00:16:18.360 22:36:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:18.360 22:36:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:18.632 Cannot find device "nvmf_tgt_br" 00:16:18.632 22:36:19 -- nvmf/common.sh@157 -- # true 00:16:18.632 22:36:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:18.632 Cannot find device "nvmf_tgt_br2" 00:16:18.632 22:36:19 -- nvmf/common.sh@158 -- # true 00:16:18.632 22:36:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:18.632 22:36:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:18.632 22:36:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.632 22:36:19 -- nvmf/common.sh@161 -- # true 00:16:18.632 22:36:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.632 22:36:19 -- nvmf/common.sh@162 -- # true 00:16:18.632 22:36:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:18.632 22:36:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:18.632 22:36:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:18.632 22:36:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:18.632 22:36:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:18.632 22:36:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:18.632 22:36:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:18.632 22:36:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:18.632 22:36:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:18.632 22:36:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:18.632 22:36:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:18.632 22:36:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:18.632 22:36:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:18.632 22:36:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:18.632 22:36:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:18.632 22:36:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:18.632 22:36:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:18.632 22:36:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:18.632 22:36:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:18.632 22:36:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:18.891 22:36:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:18.891 22:36:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:18.891 22:36:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:18.891 22:36:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:18.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:16:18.891 00:16:18.891 --- 10.0.0.2 ping statistics --- 00:16:18.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.892 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:18.892 22:36:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:18.892 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:18.892 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:18.892 00:16:18.892 --- 10.0.0.3 ping statistics --- 00:16:18.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.892 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:18.892 22:36:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:18.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:18.892 00:16:18.892 --- 10.0.0.1 ping statistics --- 00:16:18.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.892 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:18.892 22:36:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.892 22:36:19 -- nvmf/common.sh@421 -- # return 0 00:16:18.892 22:36:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:18.892 22:36:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.892 22:36:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:18.892 22:36:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:18.892 22:36:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.892 22:36:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:18.892 22:36:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:18.892 22:36:19 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:18.892 22:36:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:18.892 22:36:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:18.892 22:36:19 -- common/autotest_common.sh@10 -- # set +x 00:16:18.892 22:36:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:18.892 22:36:19 -- nvmf/common.sh@469 -- # nvmfpid=87665 00:16:18.892 22:36:19 -- nvmf/common.sh@470 -- # waitforlisten 87665 00:16:18.892 22:36:19 -- common/autotest_common.sh@829 -- # '[' -z 87665 ']' 00:16:18.892 22:36:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.892 22:36:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
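The nvmf_veth_init trace above is the network plumbing the rest of the run depends on: a dedicated namespace (nvmf_tgt_ns_spdk) for the target, veth pairs joined by the nvmf_br bridge, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 on the target interfaces, all verified by the three pings. Condensed from the commands traced above (the link-up steps, the second target interface and the error paths are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-to-target reachability check

The target itself is then launched inside that namespace ("ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ..."), which is why it listens on 10.0.0.2 while the initiator reaches it from 10.0.0.1.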
00:16:18.892 22:36:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.892 22:36:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.892 22:36:19 -- common/autotest_common.sh@10 -- # set +x 00:16:18.892 [2024-11-20 22:36:19.482120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:18.892 [2024-11-20 22:36:19.482197] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.892 [2024-11-20 22:36:19.617454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.150 [2024-11-20 22:36:19.681071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:19.150 [2024-11-20 22:36:19.681233] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.150 [2024-11-20 22:36:19.681270] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.151 [2024-11-20 22:36:19.681295] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.151 [2024-11-20 22:36:19.681928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:19.151 [2024-11-20 22:36:19.682015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:19.151 [2024-11-20 22:36:19.682186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:19.151 [2024-11-20 22:36:19.682193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.087 22:36:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.087 22:36:20 -- common/autotest_common.sh@862 -- # return 0 00:16:20.087 22:36:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:20.087 22:36:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.087 22:36:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.087 22:36:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.087 22:36:20 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.087 22:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.087 22:36:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.087 [2024-11-20 22:36:20.553165] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.087 22:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.087 22:36:20 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:20.087 22:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.087 22:36:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.087 Malloc0 00:16:20.087 22:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.087 22:36:20 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:20.087 22:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.087 22:36:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.087 22:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.087 22:36:20 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.087 22:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.087 
22:36:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.087 22:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.087 22:36:20 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.087 22:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.087 22:36:20 -- common/autotest_common.sh@10 -- # set +x 00:16:20.087 [2024-11-20 22:36:20.632821] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.087 22:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.087 22:36:20 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:20.087 22:36:20 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:20.087 22:36:20 -- nvmf/common.sh@520 -- # config=() 00:16:20.087 22:36:20 -- nvmf/common.sh@520 -- # local subsystem config 00:16:20.087 22:36:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:20.087 22:36:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:20.087 { 00:16:20.087 "params": { 00:16:20.087 "name": "Nvme$subsystem", 00:16:20.087 "trtype": "$TEST_TRANSPORT", 00:16:20.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.087 "adrfam": "ipv4", 00:16:20.087 "trsvcid": "$NVMF_PORT", 00:16:20.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.087 "hdgst": ${hdgst:-false}, 00:16:20.087 "ddgst": ${ddgst:-false} 00:16:20.087 }, 00:16:20.087 "method": "bdev_nvme_attach_controller" 00:16:20.087 } 00:16:20.087 EOF 00:16:20.087 )") 00:16:20.087 22:36:20 -- nvmf/common.sh@542 -- # cat 00:16:20.087 22:36:20 -- nvmf/common.sh@544 -- # jq . 00:16:20.087 22:36:20 -- nvmf/common.sh@545 -- # IFS=, 00:16:20.087 22:36:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:20.087 "params": { 00:16:20.087 "name": "Nvme1", 00:16:20.087 "trtype": "tcp", 00:16:20.087 "traddr": "10.0.0.2", 00:16:20.087 "adrfam": "ipv4", 00:16:20.087 "trsvcid": "4420", 00:16:20.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:20.087 "hdgst": false, 00:16:20.087 "ddgst": false 00:16:20.087 }, 00:16:20.087 "method": "bdev_nvme_attach_controller" 00:16:20.087 }' 00:16:20.087 [2024-11-20 22:36:20.692058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:20.087 [2024-11-20 22:36:20.692173] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87719 ] 00:16:20.346 [2024-11-20 22:36:20.834477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:20.346 [2024-11-20 22:36:20.916237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.346 [2024-11-20 22:36:20.917321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.346 [2024-11-20 22:36:20.917345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.605 [2024-11-20 22:36:21.119422] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
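The block printed by gen_nvmf_target_json above is the configuration bdevio consumes through --json /dev/fd/62: a single bdev_nvme_attach_controller entry that mirrors the listener created a few lines earlier. Pretty-printed here for readability, with the values exactly as rendered in this log:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" messages around this point come from the bdevio process trying to open the default RPC socket that the running nvmf target already owns; the bdevio suite below still runs and passes, so here they appear to be benign.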
00:16:20.605 [2024-11-20 22:36:21.119481] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:20.605 I/O targets: 00:16:20.605 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:20.605 00:16:20.605 00:16:20.605 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.605 http://cunit.sourceforge.net/ 00:16:20.605 00:16:20.605 00:16:20.605 Suite: bdevio tests on: Nvme1n1 00:16:20.605 Test: blockdev write read block ...passed 00:16:20.605 Test: blockdev write zeroes read block ...passed 00:16:20.605 Test: blockdev write zeroes read no split ...passed 00:16:20.605 Test: blockdev write zeroes read split ...passed 00:16:20.605 Test: blockdev write zeroes read split partial ...passed 00:16:20.605 Test: blockdev reset ...[2024-11-20 22:36:21.241956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:20.605 [2024-11-20 22:36:21.242094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d7ee0 (9): Bad file descriptor 00:16:20.605 [2024-11-20 22:36:21.259218] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:20.605 passed 00:16:20.605 Test: blockdev write read 8 blocks ...passed 00:16:20.605 Test: blockdev write read size > 128k ...passed 00:16:20.605 Test: blockdev write read invalid size ...passed 00:16:20.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:20.605 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:20.605 Test: blockdev write read max offset ...passed 00:16:20.863 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:20.863 Test: blockdev writev readv 8 blocks ...passed 00:16:20.863 Test: blockdev writev readv 30 x 1block ...passed 00:16:20.863 Test: blockdev writev readv block ...passed 00:16:20.863 Test: blockdev writev readv size > 128k ...passed 00:16:20.863 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:20.863 Test: blockdev comparev and writev ...[2024-11-20 22:36:21.434540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:20.864 [2024-11-20 22:36:21.434594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.434640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:20.864 [2024-11-20 22:36:21.434650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.435080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:20.864 [2024-11-20 22:36:21.435107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.435124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:20.864 [2024-11-20 22:36:21.435133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.435512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:20.864 [2024-11-20 22:36:21.435541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.435559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:20.864 [2024-11-20 22:36:21.435568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.435931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:20.864 [2024-11-20 22:36:21.435960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.435976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:20.864 [2024-11-20 22:36:21.435985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:20.864 passed 00:16:20.864 Test: blockdev nvme passthru rw ...passed 00:16:20.864 Test: blockdev nvme passthru vendor specific ...passed 00:16:20.864 Test: blockdev nvme admin passthru ...[2024-11-20 22:36:21.518654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:20.864 [2024-11-20 22:36:21.518686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.518837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:20.864 [2024-11-20 22:36:21.518852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.518979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:20.864 [2024-11-20 22:36:21.518994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:20.864 [2024-11-20 22:36:21.519112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:20.864 [2024-11-20 22:36:21.519126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:20.864 passed 00:16:20.864 Test: blockdev copy ...passed 00:16:20.864 00:16:20.864 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.864 suites 1 1 n/a 0 0 00:16:20.864 tests 23 23 23 0 0 00:16:20.864 asserts 152 152 152 0 n/a 00:16:20.864 00:16:20.864 Elapsed time = 0.898 seconds 00:16:21.122 22:36:21 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.122 22:36:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.122 22:36:21 -- common/autotest_common.sh@10 -- # set +x 00:16:21.122 22:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.122 22:36:21 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:21.122 22:36:21 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:21.122 22:36:21 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:21.122 22:36:21 -- nvmf/common.sh@116 -- # sync 00:16:21.381 22:36:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:21.381 22:36:21 -- nvmf/common.sh@119 -- # set +e 00:16:21.381 22:36:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:21.381 22:36:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:21.381 rmmod nvme_tcp 00:16:21.381 rmmod nvme_fabrics 00:16:21.381 rmmod nvme_keyring 00:16:21.381 22:36:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:21.381 22:36:21 -- nvmf/common.sh@123 -- # set -e 00:16:21.381 22:36:21 -- nvmf/common.sh@124 -- # return 0 00:16:21.381 22:36:21 -- nvmf/common.sh@477 -- # '[' -n 87665 ']' 00:16:21.381 22:36:21 -- nvmf/common.sh@478 -- # killprocess 87665 00:16:21.381 22:36:21 -- common/autotest_common.sh@936 -- # '[' -z 87665 ']' 00:16:21.381 22:36:21 -- common/autotest_common.sh@940 -- # kill -0 87665 00:16:21.381 22:36:21 -- common/autotest_common.sh@941 -- # uname 00:16:21.381 22:36:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.381 22:36:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87665 00:16:21.381 killing process with pid 87665 00:16:21.381 22:36:22 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:21.381 22:36:22 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:21.381 22:36:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87665' 00:16:21.381 22:36:22 -- common/autotest_common.sh@955 -- # kill 87665 00:16:21.381 22:36:22 -- common/autotest_common.sh@960 -- # wait 87665 00:16:21.640 22:36:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:21.640 22:36:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:21.640 22:36:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:21.640 22:36:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.640 22:36:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:21.640 22:36:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.640 22:36:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.640 22:36:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.899 22:36:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:21.899 00:16:21.899 real 0m3.558s 00:16:21.899 user 0m12.792s 00:16:21.899 sys 0m0.873s 00:16:21.899 22:36:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:21.899 22:36:22 -- common/autotest_common.sh@10 -- # set +x 00:16:21.899 ************************************ 00:16:21.899 END TEST nvmf_bdevio 00:16:21.899 ************************************ 00:16:21.899 22:36:22 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:21.899 22:36:22 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:21.899 22:36:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:21.899 22:36:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.899 22:36:22 -- common/autotest_common.sh@10 -- # set +x 00:16:21.899 ************************************ 00:16:21.899 START TEST nvmf_bdevio_no_huge 00:16:21.899 ************************************ 00:16:21.899 22:36:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:21.899 * Looking for test storage... 
00:16:21.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:21.899 22:36:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:21.899 22:36:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:21.899 22:36:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:21.899 22:36:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:21.899 22:36:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:21.899 22:36:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:21.899 22:36:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:21.899 22:36:22 -- scripts/common.sh@335 -- # IFS=.-: 00:16:21.899 22:36:22 -- scripts/common.sh@335 -- # read -ra ver1 00:16:21.899 22:36:22 -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.899 22:36:22 -- scripts/common.sh@336 -- # read -ra ver2 00:16:21.899 22:36:22 -- scripts/common.sh@337 -- # local 'op=<' 00:16:21.899 22:36:22 -- scripts/common.sh@339 -- # ver1_l=2 00:16:21.899 22:36:22 -- scripts/common.sh@340 -- # ver2_l=1 00:16:21.899 22:36:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:21.899 22:36:22 -- scripts/common.sh@343 -- # case "$op" in 00:16:21.899 22:36:22 -- scripts/common.sh@344 -- # : 1 00:16:21.899 22:36:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:21.899 22:36:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:21.899 22:36:22 -- scripts/common.sh@364 -- # decimal 1 00:16:21.899 22:36:22 -- scripts/common.sh@352 -- # local d=1 00:16:21.899 22:36:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.899 22:36:22 -- scripts/common.sh@354 -- # echo 1 00:16:21.899 22:36:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:21.899 22:36:22 -- scripts/common.sh@365 -- # decimal 2 00:16:21.899 22:36:22 -- scripts/common.sh@352 -- # local d=2 00:16:21.899 22:36:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.899 22:36:22 -- scripts/common.sh@354 -- # echo 2 00:16:21.899 22:36:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:21.899 22:36:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:21.899 22:36:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:21.899 22:36:22 -- scripts/common.sh@367 -- # return 0 00:16:21.899 22:36:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.899 22:36:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:21.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.899 --rc genhtml_branch_coverage=1 00:16:21.899 --rc genhtml_function_coverage=1 00:16:21.899 --rc genhtml_legend=1 00:16:21.899 --rc geninfo_all_blocks=1 00:16:21.899 --rc geninfo_unexecuted_blocks=1 00:16:21.899 00:16:21.899 ' 00:16:21.899 22:36:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:21.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.899 --rc genhtml_branch_coverage=1 00:16:21.899 --rc genhtml_function_coverage=1 00:16:21.899 --rc genhtml_legend=1 00:16:21.899 --rc geninfo_all_blocks=1 00:16:21.899 --rc geninfo_unexecuted_blocks=1 00:16:21.899 00:16:21.899 ' 00:16:21.899 22:36:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:21.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.899 --rc genhtml_branch_coverage=1 00:16:21.899 --rc genhtml_function_coverage=1 00:16:21.899 --rc genhtml_legend=1 00:16:21.899 --rc geninfo_all_blocks=1 00:16:21.899 --rc geninfo_unexecuted_blocks=1 00:16:21.899 00:16:21.899 ' 00:16:21.899 
22:36:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:21.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.899 --rc genhtml_branch_coverage=1 00:16:21.899 --rc genhtml_function_coverage=1 00:16:21.899 --rc genhtml_legend=1 00:16:21.899 --rc geninfo_all_blocks=1 00:16:21.899 --rc geninfo_unexecuted_blocks=1 00:16:21.899 00:16:21.899 ' 00:16:21.899 22:36:22 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.899 22:36:22 -- nvmf/common.sh@7 -- # uname -s 00:16:21.899 22:36:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.899 22:36:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.899 22:36:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.899 22:36:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.899 22:36:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.899 22:36:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.899 22:36:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.900 22:36:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.900 22:36:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.900 22:36:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.900 22:36:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:16:21.900 22:36:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:16:21.900 22:36:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.900 22:36:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.900 22:36:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.900 22:36:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.900 22:36:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.900 22:36:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.900 22:36:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.900 22:36:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.900 22:36:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.900 22:36:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.900 22:36:22 -- paths/export.sh@5 -- # export PATH 00:16:21.900 22:36:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.900 22:36:22 -- nvmf/common.sh@46 -- # : 0 00:16:21.900 22:36:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:21.900 22:36:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:21.900 22:36:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:21.900 22:36:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.900 22:36:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.900 22:36:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:21.900 22:36:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:21.900 22:36:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:21.900 22:36:22 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.900 22:36:22 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.900 22:36:22 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:21.900 22:36:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:21.900 22:36:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.900 22:36:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:21.900 22:36:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:21.900 22:36:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:21.900 22:36:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.900 22:36:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.900 22:36:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.900 22:36:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:21.900 22:36:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:21.900 22:36:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:21.900 22:36:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:21.900 22:36:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:21.900 22:36:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:21.900 22:36:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.900 22:36:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.900 22:36:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:21.900 22:36:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:21.900 22:36:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.900 22:36:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.900 22:36:22 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.900 22:36:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.900 22:36:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.900 22:36:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.900 22:36:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.900 22:36:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.900 22:36:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:22.159 22:36:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:22.159 Cannot find device "nvmf_tgt_br" 00:16:22.159 22:36:22 -- nvmf/common.sh@154 -- # true 00:16:22.159 22:36:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.159 Cannot find device "nvmf_tgt_br2" 00:16:22.159 22:36:22 -- nvmf/common.sh@155 -- # true 00:16:22.159 22:36:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:22.159 22:36:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:22.159 Cannot find device "nvmf_tgt_br" 00:16:22.159 22:36:22 -- nvmf/common.sh@157 -- # true 00:16:22.159 22:36:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:22.159 Cannot find device "nvmf_tgt_br2" 00:16:22.159 22:36:22 -- nvmf/common.sh@158 -- # true 00:16:22.159 22:36:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:22.159 22:36:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:22.159 22:36:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.159 22:36:22 -- nvmf/common.sh@161 -- # true 00:16:22.159 22:36:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.159 22:36:22 -- nvmf/common.sh@162 -- # true 00:16:22.159 22:36:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.159 22:36:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.159 22:36:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.159 22:36:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.159 22:36:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.159 22:36:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.159 22:36:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.159 22:36:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.159 22:36:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:22.159 22:36:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:22.159 22:36:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:22.159 22:36:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:22.159 22:36:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:22.159 22:36:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.159 22:36:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.159 22:36:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:22.159 22:36:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:22.159 22:36:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:22.159 22:36:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.159 22:36:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.418 22:36:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.418 22:36:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.418 22:36:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.418 22:36:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:22.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:22.418 00:16:22.418 --- 10.0.0.2 ping statistics --- 00:16:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.418 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:22.418 22:36:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:22.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:22.418 00:16:22.418 --- 10.0.0.3 ping statistics --- 00:16:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.418 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:22.418 22:36:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:22.418 00:16:22.418 --- 10.0.0.1 ping statistics --- 00:16:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.418 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:22.418 22:36:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.418 22:36:22 -- nvmf/common.sh@421 -- # return 0 00:16:22.418 22:36:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:22.418 22:36:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.418 22:36:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:22.418 22:36:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:22.418 22:36:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.418 22:36:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:22.418 22:36:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:22.418 22:36:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:22.418 22:36:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:22.418 22:36:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.418 22:36:22 -- common/autotest_common.sh@10 -- # set +x 00:16:22.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
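This second bdevio pass repeats the same namespace, transport and subsystem setup; the difference shows up in how the applications are launched on the lines that follow, where both the nvmf target and bdevio are given --no-huge -s 1024 so that DPDK works from ordinary (non-hugepage) memory capped at 1024 MB. Side by side, the two target invocations from this log, with the "ip netns exec nvmf_tgt_ns_spdk" prefix and the full binary path trimmed:

  # first bdevio pass (hugepages):
  nvmf_tgt -i 0 -e 0xFFFF -m 0x78
  # this pass (no hugepages, 1024 MB of regular pages):
  nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78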
00:16:22.418 22:36:22 -- nvmf/common.sh@469 -- # nvmfpid=87911 00:16:22.418 22:36:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:22.418 22:36:22 -- nvmf/common.sh@470 -- # waitforlisten 87911 00:16:22.418 22:36:22 -- common/autotest_common.sh@829 -- # '[' -z 87911 ']' 00:16:22.418 22:36:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.418 22:36:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.418 22:36:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.418 22:36:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.418 22:36:22 -- common/autotest_common.sh@10 -- # set +x 00:16:22.418 [2024-11-20 22:36:22.985309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:22.418 [2024-11-20 22:36:22.985621] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:22.418 [2024-11-20 22:36:23.122408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.676 [2024-11-20 22:36:23.245945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:22.676 [2024-11-20 22:36:23.246356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.676 [2024-11-20 22:36:23.246513] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.676 [2024-11-20 22:36:23.246766] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
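The point of this particular test (nvmf_bdevio_no_huge) is that the target runs without hugepages, so nvmfappstart launches it with --no-huge and a 1024 MiB regular-memory pool, inside the namespace created above. Roughly, based on the command visible in the trace:

# launch the SPDK NVMe-oF target inside the test namespace, hugepages disabled
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
# the harness (waitforlisten) then polls /var/tmp/spdk.sock until the RPC server answers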
00:16:22.676 [2024-11-20 22:36:23.247066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:22.676 [2024-11-20 22:36:23.247212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:22.676 [2024-11-20 22:36:23.247352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:22.676 [2024-11-20 22:36:23.247363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.242 22:36:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.242 22:36:23 -- common/autotest_common.sh@862 -- # return 0 00:16:23.242 22:36:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:23.242 22:36:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.242 22:36:23 -- common/autotest_common.sh@10 -- # set +x 00:16:23.242 22:36:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.242 22:36:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.242 22:36:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.242 22:36:23 -- common/autotest_common.sh@10 -- # set +x 00:16:23.500 [2024-11-20 22:36:23.977892] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.500 22:36:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.500 22:36:23 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:23.500 22:36:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.500 22:36:23 -- common/autotest_common.sh@10 -- # set +x 00:16:23.500 Malloc0 00:16:23.500 22:36:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.500 22:36:23 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:23.500 22:36:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.500 22:36:23 -- common/autotest_common.sh@10 -- # set +x 00:16:23.500 22:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.500 22:36:24 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:23.500 22:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.500 22:36:24 -- common/autotest_common.sh@10 -- # set +x 00:16:23.500 22:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.500 22:36:24 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.501 22:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.501 22:36:24 -- common/autotest_common.sh@10 -- # set +x 00:16:23.501 [2024-11-20 22:36:24.020537] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.501 22:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.501 22:36:24 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:23.501 22:36:24 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:23.501 22:36:24 -- nvmf/common.sh@520 -- # config=() 00:16:23.501 22:36:24 -- nvmf/common.sh@520 -- # local subsystem config 00:16:23.501 22:36:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:23.501 22:36:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:23.501 { 00:16:23.501 "params": { 00:16:23.501 "name": "Nvme$subsystem", 00:16:23.501 "trtype": "$TEST_TRANSPORT", 00:16:23.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.501 "adrfam": "ipv4", 00:16:23.501 "trsvcid": "$NVMF_PORT", 
00:16:23.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.501 "hdgst": ${hdgst:-false}, 00:16:23.501 "ddgst": ${ddgst:-false} 00:16:23.501 }, 00:16:23.501 "method": "bdev_nvme_attach_controller" 00:16:23.501 } 00:16:23.501 EOF 00:16:23.501 )") 00:16:23.501 22:36:24 -- nvmf/common.sh@542 -- # cat 00:16:23.501 22:36:24 -- nvmf/common.sh@544 -- # jq . 00:16:23.501 22:36:24 -- nvmf/common.sh@545 -- # IFS=, 00:16:23.501 22:36:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:23.501 "params": { 00:16:23.501 "name": "Nvme1", 00:16:23.501 "trtype": "tcp", 00:16:23.501 "traddr": "10.0.0.2", 00:16:23.501 "adrfam": "ipv4", 00:16:23.501 "trsvcid": "4420", 00:16:23.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.501 "hdgst": false, 00:16:23.501 "ddgst": false 00:16:23.501 }, 00:16:23.501 "method": "bdev_nvme_attach_controller" 00:16:23.501 }' 00:16:23.501 [2024-11-20 22:36:24.071142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:23.501 [2024-11-20 22:36:24.071222] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87965 ] 00:16:23.501 [2024-11-20 22:36:24.201561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:23.759 [2024-11-20 22:36:24.316340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.759 [2024-11-20 22:36:24.316203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.759 [2024-11-20 22:36:24.316339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.018 [2024-11-20 22:36:24.499968] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:24.018 [2024-11-20 22:36:24.500024] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:24.018 I/O targets: 00:16:24.018 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:24.018 00:16:24.018 00:16:24.018 CUnit - A unit testing framework for C - Version 2.1-3 00:16:24.018 http://cunit.sourceforge.net/ 00:16:24.018 00:16:24.018 00:16:24.018 Suite: bdevio tests on: Nvme1n1 00:16:24.018 Test: blockdev write read block ...passed 00:16:24.018 Test: blockdev write zeroes read block ...passed 00:16:24.018 Test: blockdev write zeroes read no split ...passed 00:16:24.018 Test: blockdev write zeroes read split ...passed 00:16:24.018 Test: blockdev write zeroes read split partial ...passed 00:16:24.018 Test: blockdev reset ...[2024-11-20 22:36:24.631581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:24.018 [2024-11-20 22:36:24.631687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fdd10 (9): Bad file descriptor 00:16:24.018 passed 00:16:24.018 Test: blockdev write read 8 blocks ...[2024-11-20 22:36:24.649022] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:24.018 passed 00:16:24.018 Test: blockdev write read size > 128k ...passed 00:16:24.018 Test: blockdev write read invalid size ...passed 00:16:24.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:24.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:24.018 Test: blockdev write read max offset ...passed 00:16:24.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:24.277 Test: blockdev writev readv 8 blocks ...passed 00:16:24.277 Test: blockdev writev readv 30 x 1block ...passed 00:16:24.277 Test: blockdev writev readv block ...passed 00:16:24.277 Test: blockdev writev readv size > 128k ...passed 00:16:24.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:24.277 Test: blockdev comparev and writev ...[2024-11-20 22:36:24.827765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.277 [2024-11-20 22:36:24.828104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.828147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.277 [2024-11-20 22:36:24.828159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.828569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.277 [2024-11-20 22:36:24.828586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.828600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.277 [2024-11-20 22:36:24.828609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.828942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.277 [2024-11-20 22:36:24.828955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.828969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.277 [2024-11-20 22:36:24.828978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.829323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.277 [2024-11-20 22:36:24.829339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.829353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:24.277 [2024-11-20 22:36:24.829363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:16:24.277 passed 00:16:24.277 Test: blockdev nvme passthru rw ...passed 00:16:24.277 Test: blockdev nvme passthru vendor specific ...passed 00:16:24.277 Test: blockdev nvme admin passthru ...[2024-11-20 22:36:24.913012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.277 [2024-11-20 22:36:24.913049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.913237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.277 [2024-11-20 22:36:24.913308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.913461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.277 [2024-11-20 22:36:24.913478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:24.277 [2024-11-20 22:36:24.913608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.277 [2024-11-20 22:36:24.913630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:24.277 passed 00:16:24.277 Test: blockdev copy ...passed 00:16:24.277 00:16:24.277 Run Summary: Type Total Ran Passed Failed Inactive 00:16:24.277 suites 1 1 n/a 0 0 00:16:24.277 tests 23 23 23 0 0 00:16:24.277 asserts 152 152 152 0 n/a 00:16:24.277 00:16:24.277 Elapsed time = 0.948 seconds 00:16:24.844 22:36:25 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.844 22:36:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.844 22:36:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.844 22:36:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.844 22:36:25 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:24.844 22:36:25 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:24.844 22:36:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:24.844 22:36:25 -- nvmf/common.sh@116 -- # sync 00:16:24.844 22:36:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:24.844 22:36:25 -- nvmf/common.sh@119 -- # set +e 00:16:24.844 22:36:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:24.844 22:36:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:24.844 rmmod nvme_tcp 00:16:24.844 rmmod nvme_fabrics 00:16:24.844 rmmod nvme_keyring 00:16:24.844 22:36:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:24.844 22:36:25 -- nvmf/common.sh@123 -- # set -e 00:16:24.844 22:36:25 -- nvmf/common.sh@124 -- # return 0 00:16:24.844 22:36:25 -- nvmf/common.sh@477 -- # '[' -n 87911 ']' 00:16:24.844 22:36:25 -- nvmf/common.sh@478 -- # killprocess 87911 00:16:24.844 22:36:25 -- common/autotest_common.sh@936 -- # '[' -z 87911 ']' 00:16:24.844 22:36:25 -- common/autotest_common.sh@940 -- # kill -0 87911 00:16:24.844 22:36:25 -- common/autotest_common.sh@941 -- # uname 00:16:24.844 22:36:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:24.844 22:36:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87911 00:16:24.844 killing process with pid 87911 00:16:24.844 22:36:25 -- common/autotest_common.sh@942 -- # 
process_name=reactor_3 00:16:24.844 22:36:25 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:24.844 22:36:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87911' 00:16:24.844 22:36:25 -- common/autotest_common.sh@955 -- # kill 87911 00:16:24.844 22:36:25 -- common/autotest_common.sh@960 -- # wait 87911 00:16:25.412 22:36:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:25.412 22:36:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:25.412 22:36:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:25.412 22:36:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.412 22:36:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:25.412 22:36:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.412 22:36:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.412 22:36:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.412 22:36:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:25.412 ************************************ 00:16:25.412 END TEST nvmf_bdevio_no_huge 00:16:25.412 ************************************ 00:16:25.412 00:16:25.412 real 0m3.508s 00:16:25.412 user 0m12.591s 00:16:25.412 sys 0m1.322s 00:16:25.412 22:36:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:25.412 22:36:25 -- common/autotest_common.sh@10 -- # set +x 00:16:25.412 22:36:25 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:25.412 22:36:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:25.412 22:36:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.412 22:36:25 -- common/autotest_common.sh@10 -- # set +x 00:16:25.412 ************************************ 00:16:25.412 START TEST nvmf_tls 00:16:25.412 ************************************ 00:16:25.412 22:36:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:25.412 * Looking for test storage... 00:16:25.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:25.412 22:36:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:25.412 22:36:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:25.412 22:36:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:25.671 22:36:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:25.671 22:36:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:25.671 22:36:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:25.671 22:36:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:25.671 22:36:26 -- scripts/common.sh@335 -- # IFS=.-: 00:16:25.671 22:36:26 -- scripts/common.sh@335 -- # read -ra ver1 00:16:25.671 22:36:26 -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.671 22:36:26 -- scripts/common.sh@336 -- # read -ra ver2 00:16:25.671 22:36:26 -- scripts/common.sh@337 -- # local 'op=<' 00:16:25.671 22:36:26 -- scripts/common.sh@339 -- # ver1_l=2 00:16:25.671 22:36:26 -- scripts/common.sh@340 -- # ver2_l=1 00:16:25.671 22:36:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:25.671 22:36:26 -- scripts/common.sh@343 -- # case "$op" in 00:16:25.671 22:36:26 -- scripts/common.sh@344 -- # : 1 00:16:25.671 22:36:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:25.671 22:36:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.671 22:36:26 -- scripts/common.sh@364 -- # decimal 1 00:16:25.671 22:36:26 -- scripts/common.sh@352 -- # local d=1 00:16:25.671 22:36:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.671 22:36:26 -- scripts/common.sh@354 -- # echo 1 00:16:25.671 22:36:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:25.671 22:36:26 -- scripts/common.sh@365 -- # decimal 2 00:16:25.671 22:36:26 -- scripts/common.sh@352 -- # local d=2 00:16:25.671 22:36:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.671 22:36:26 -- scripts/common.sh@354 -- # echo 2 00:16:25.671 22:36:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:25.671 22:36:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:25.671 22:36:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:25.671 22:36:26 -- scripts/common.sh@367 -- # return 0 00:16:25.671 22:36:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.671 22:36:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:25.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.671 --rc genhtml_branch_coverage=1 00:16:25.671 --rc genhtml_function_coverage=1 00:16:25.671 --rc genhtml_legend=1 00:16:25.671 --rc geninfo_all_blocks=1 00:16:25.671 --rc geninfo_unexecuted_blocks=1 00:16:25.671 00:16:25.671 ' 00:16:25.671 22:36:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:25.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.671 --rc genhtml_branch_coverage=1 00:16:25.671 --rc genhtml_function_coverage=1 00:16:25.671 --rc genhtml_legend=1 00:16:25.671 --rc geninfo_all_blocks=1 00:16:25.671 --rc geninfo_unexecuted_blocks=1 00:16:25.671 00:16:25.671 ' 00:16:25.671 22:36:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:25.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.671 --rc genhtml_branch_coverage=1 00:16:25.671 --rc genhtml_function_coverage=1 00:16:25.671 --rc genhtml_legend=1 00:16:25.671 --rc geninfo_all_blocks=1 00:16:25.671 --rc geninfo_unexecuted_blocks=1 00:16:25.671 00:16:25.671 ' 00:16:25.671 22:36:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:25.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.671 --rc genhtml_branch_coverage=1 00:16:25.671 --rc genhtml_function_coverage=1 00:16:25.671 --rc genhtml_legend=1 00:16:25.671 --rc geninfo_all_blocks=1 00:16:25.671 --rc geninfo_unexecuted_blocks=1 00:16:25.671 00:16:25.671 ' 00:16:25.671 22:36:26 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.671 22:36:26 -- nvmf/common.sh@7 -- # uname -s 00:16:25.671 22:36:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.671 22:36:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.671 22:36:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.671 22:36:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.671 22:36:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.671 22:36:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.671 22:36:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.671 22:36:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.671 22:36:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.671 22:36:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.671 22:36:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:16:25.671 
22:36:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:16:25.671 22:36:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.671 22:36:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.671 22:36:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.671 22:36:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.671 22:36:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.671 22:36:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.671 22:36:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.671 22:36:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.671 22:36:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.671 22:36:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.671 22:36:26 -- paths/export.sh@5 -- # export PATH 00:16:25.671 22:36:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.671 22:36:26 -- nvmf/common.sh@46 -- # : 0 00:16:25.671 22:36:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:25.671 22:36:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:25.671 22:36:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:25.671 22:36:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.672 22:36:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.672 22:36:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
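The common.sh prologue above also creates a per-run host identity for later connect calls. In outline (the exact way the host ID is derived from the NQN is not shown in the trace, so the parameter expansion below is only illustrative):

# host identity used by the initiator; the uuid differs on every run
NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:c4482f2d-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # illustrative: take the uuid part of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")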
00:16:25.672 22:36:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:25.672 22:36:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:25.672 22:36:26 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.672 22:36:26 -- target/tls.sh@71 -- # nvmftestinit 00:16:25.672 22:36:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:25.672 22:36:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.672 22:36:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:25.672 22:36:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:25.672 22:36:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:25.672 22:36:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.672 22:36:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.672 22:36:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.672 22:36:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:25.672 22:36:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:25.672 22:36:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:25.672 22:36:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:25.672 22:36:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:25.672 22:36:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:25.672 22:36:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.672 22:36:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.672 22:36:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.672 22:36:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:25.672 22:36:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.672 22:36:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.672 22:36:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.672 22:36:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.672 22:36:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.672 22:36:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.672 22:36:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.672 22:36:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.672 22:36:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:25.672 22:36:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:25.672 Cannot find device "nvmf_tgt_br" 00:16:25.672 22:36:26 -- nvmf/common.sh@154 -- # true 00:16:25.672 22:36:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.672 Cannot find device "nvmf_tgt_br2" 00:16:25.672 22:36:26 -- nvmf/common.sh@155 -- # true 00:16:25.672 22:36:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:25.672 22:36:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:25.672 Cannot find device "nvmf_tgt_br" 00:16:25.672 22:36:26 -- nvmf/common.sh@157 -- # true 00:16:25.672 22:36:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:25.672 Cannot find device "nvmf_tgt_br2" 00:16:25.672 22:36:26 -- nvmf/common.sh@158 -- # true 00:16:25.672 22:36:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:25.672 22:36:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:25.672 22:36:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:16:25.672 22:36:26 -- nvmf/common.sh@161 -- # true 00:16:25.672 22:36:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.672 22:36:26 -- nvmf/common.sh@162 -- # true 00:16:25.672 22:36:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.672 22:36:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.672 22:36:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.672 22:36:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.672 22:36:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.672 22:36:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.930 22:36:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.930 22:36:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.930 22:36:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.930 22:36:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:25.930 22:36:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:25.930 22:36:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:25.930 22:36:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:25.930 22:36:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.930 22:36:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.930 22:36:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.930 22:36:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:25.930 22:36:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:25.931 22:36:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.931 22:36:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.931 22:36:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.931 22:36:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.931 22:36:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.931 22:36:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:25.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:25.931 00:16:25.931 --- 10.0.0.2 ping statistics --- 00:16:25.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.931 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:25.931 22:36:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:25.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:25.931 00:16:25.931 --- 10.0.0.3 ping statistics --- 00:16:25.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.931 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:25.931 22:36:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:25.931 00:16:25.931 --- 10.0.0.1 ping statistics --- 00:16:25.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.931 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:25.931 22:36:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.931 22:36:26 -- nvmf/common.sh@421 -- # return 0 00:16:25.931 22:36:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:25.931 22:36:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.931 22:36:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:25.931 22:36:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:25.931 22:36:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.931 22:36:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:25.931 22:36:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:25.931 22:36:26 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:25.931 22:36:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:25.931 22:36:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.931 22:36:26 -- common/autotest_common.sh@10 -- # set +x 00:16:25.931 22:36:26 -- nvmf/common.sh@469 -- # nvmfpid=88157 00:16:25.931 22:36:26 -- nvmf/common.sh@470 -- # waitforlisten 88157 00:16:25.931 22:36:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:25.931 22:36:26 -- common/autotest_common.sh@829 -- # '[' -z 88157 ']' 00:16:25.931 22:36:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.931 22:36:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.931 22:36:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.931 22:36:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.931 22:36:26 -- common/autotest_common.sh@10 -- # set +x 00:16:25.931 [2024-11-20 22:36:26.616665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:25.931 [2024-11-20 22:36:26.616728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.189 [2024-11-20 22:36:26.750036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.189 [2024-11-20 22:36:26.835341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:26.189 [2024-11-20 22:36:26.835872] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.189 [2024-11-20 22:36:26.836015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.189 [2024-11-20 22:36:26.836193] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
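For the TLS test the target is started with --wait-for-rpc, so the ssl socket implementation can be selected and its TLS version pinned over RPC before the framework finishes initializing; that is what the next stretch of the trace does. In outline, using the RPCs that appear below:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl                 # make ssl the default socket implementation
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init                          # only now does the target finish start-up
# the sock_impl_get_options / jq calls in the trace just verify each setting took effect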
00:16:26.189 [2024-11-20 22:36:26.836389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.189 22:36:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.189 22:36:26 -- common/autotest_common.sh@862 -- # return 0 00:16:26.189 22:36:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:26.189 22:36:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:26.189 22:36:26 -- common/autotest_common.sh@10 -- # set +x 00:16:26.448 22:36:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.448 22:36:26 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:26.448 22:36:26 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:26.707 true 00:16:26.707 22:36:27 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.707 22:36:27 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:26.965 22:36:27 -- target/tls.sh@82 -- # version=0 00:16:26.965 22:36:27 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:26.965 22:36:27 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:27.225 22:36:27 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:27.225 22:36:27 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:27.490 22:36:27 -- target/tls.sh@90 -- # version=13 00:16:27.490 22:36:27 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:27.490 22:36:27 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:27.786 22:36:28 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:27.786 22:36:28 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:27.786 22:36:28 -- target/tls.sh@98 -- # version=7 00:16:27.786 22:36:28 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:27.787 22:36:28 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:27.787 22:36:28 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:28.058 22:36:28 -- target/tls.sh@105 -- # ktls=false 00:16:28.058 22:36:28 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:28.058 22:36:28 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:28.316 22:36:29 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:28.316 22:36:29 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:28.585 22:36:29 -- target/tls.sh@113 -- # ktls=true 00:16:28.585 22:36:29 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:28.585 22:36:29 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:28.850 22:36:29 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:28.850 22:36:29 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:29.109 22:36:29 -- target/tls.sh@121 -- # ktls=false 00:16:29.109 22:36:29 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:29.109 22:36:29 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:29.109 22:36:29 -- target/tls.sh@49 -- # local key hash crc 00:16:29.109 22:36:29 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:29.109 22:36:29 -- target/tls.sh@51 -- # hash=01 00:16:29.109 22:36:29 -- 
target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # tail -c8 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # gzip -1 -c 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # head -c 4 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # crc='p$H�' 00:16:29.109 22:36:29 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:29.109 22:36:29 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:29.109 22:36:29 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:29.109 22:36:29 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:29.109 22:36:29 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:29.109 22:36:29 -- target/tls.sh@49 -- # local key hash crc 00:16:29.109 22:36:29 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:29.109 22:36:29 -- target/tls.sh@51 -- # hash=01 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # gzip -1 -c 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # tail -c8 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # head -c 4 00:16:29.109 22:36:29 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:29.109 22:36:29 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:29.109 22:36:29 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:29.109 22:36:29 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:29.109 22:36:29 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:29.109 22:36:29 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.109 22:36:29 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:29.109 22:36:29 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:29.109 22:36:29 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:29.109 22:36:29 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.109 22:36:29 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:29.109 22:36:29 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:29.367 22:36:30 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:29.934 22:36:30 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.934 22:36:30 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.934 22:36:30 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:29.934 [2024-11-20 22:36:30.630940] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.934 22:36:30 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:30.193 22:36:30 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:30.451 [2024-11-20 22:36:31.107005] tcp.c: 914:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:16:30.451 [2024-11-20 22:36:31.107220] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.451 22:36:31 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:30.710 malloc0 00:16:30.710 22:36:31 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:30.970 22:36:31 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:31.228 22:36:31 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:41.205 Initializing NVMe Controllers 00:16:41.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:41.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:41.205 Initialization complete. Launching workers. 00:16:41.205 ======================================================== 00:16:41.205 Latency(us) 00:16:41.205 Device Information : IOPS MiB/s Average min max 00:16:41.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11593.27 45.29 5521.41 807.55 11678.24 00:16:41.205 ======================================================== 00:16:41.205 Total : 11593.27 45.29 5521.41 807.55 11678.24 00:16:41.205 00:16:41.205 22:36:41 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:41.205 22:36:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:41.205 22:36:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:41.205 22:36:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:41.205 22:36:41 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:41.205 22:36:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:41.205 22:36:41 -- target/tls.sh@28 -- # bdevperf_pid=88508 00:16:41.205 22:36:41 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:41.205 22:36:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:41.206 22:36:41 -- target/tls.sh@31 -- # waitforlisten 88508 /var/tmp/bdevperf.sock 00:16:41.206 22:36:41 -- common/autotest_common.sh@829 -- # '[' -z 88508 ']' 00:16:41.206 22:36:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.206 22:36:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.206 22:36:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
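The two NVMeTLSkey-1:01:... strings written to key1.txt and key2.txt earlier in the trace are PSK interchange keys: the configured hexadecimal secret with a 4-byte CRC-32 appended, base64-encoded, wrapped in the NVMeTLSkey-1:01:...: framing. The helper obtains the CRC by compressing the secret with gzip -1 and reading the checksum field from the gzip trailer. A sketch of the same derivation for the first key, mirroring the commands in the trace (paths shortened):

key=00112233445566778899aabbccddeeff
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # gzip trailer = CRC-32 then ISIZE; keep the CRC
psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
echo -n "$psk" > key1.txt && chmod 0600 key1.txt
# target side, as in the trace: TLS-enabled listener plus a host bound to this PSK
#   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
#   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt
# initiator side: bdev_nvme_attach_controller ... --psk key1.txt (the bdevperf runs that follow)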
00:16:41.206 22:36:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.206 22:36:41 -- common/autotest_common.sh@10 -- # set +x 00:16:41.465 [2024-11-20 22:36:41.966738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:41.465 [2024-11-20 22:36:41.966867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88508 ] 00:16:41.465 [2024-11-20 22:36:42.108859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.465 [2024-11-20 22:36:42.194643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.400 22:36:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.400 22:36:42 -- common/autotest_common.sh@862 -- # return 0 00:16:42.400 22:36:42 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:42.658 [2024-11-20 22:36:43.204791] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:42.658 TLSTESTn1 00:16:42.658 22:36:43 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:42.916 Running I/O for 10 seconds... 00:16:52.890 00:16:52.890 Latency(us) 00:16:52.890 [2024-11-20T22:36:53.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.890 [2024-11-20T22:36:53.624Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:52.890 Verification LBA range: start 0x0 length 0x2000 00:16:52.890 TLSTESTn1 : 10.02 5174.80 20.21 0.00 0.00 24693.52 4974.78 22997.18 00:16:52.890 [2024-11-20T22:36:53.624Z] =================================================================================================================== 00:16:52.890 [2024-11-20T22:36:53.624Z] Total : 5174.80 20.21 0.00 0.00 24693.52 4974.78 22997.18 00:16:52.890 0 00:16:52.890 22:36:53 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.890 22:36:53 -- target/tls.sh@45 -- # killprocess 88508 00:16:52.890 22:36:53 -- common/autotest_common.sh@936 -- # '[' -z 88508 ']' 00:16:52.890 22:36:53 -- common/autotest_common.sh@940 -- # kill -0 88508 00:16:52.890 22:36:53 -- common/autotest_common.sh@941 -- # uname 00:16:52.890 22:36:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.890 22:36:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88508 00:16:52.890 22:36:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:52.890 22:36:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:52.890 killing process with pid 88508 00:16:52.890 22:36:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88508' 00:16:52.890 22:36:53 -- common/autotest_common.sh@955 -- # kill 88508 00:16:52.890 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.890 00:16:52.890 Latency(us) 00:16:52.890 [2024-11-20T22:36:53.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.890 [2024-11-20T22:36:53.624Z] 
=================================================================================================================== 00:16:52.890 [2024-11-20T22:36:53.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:52.890 22:36:53 -- common/autotest_common.sh@960 -- # wait 88508 00:16:53.149 22:36:53 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:53.149 22:36:53 -- common/autotest_common.sh@650 -- # local es=0 00:16:53.149 22:36:53 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:53.149 22:36:53 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:53.149 22:36:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.149 22:36:53 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:53.149 22:36:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.149 22:36:53 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:53.149 22:36:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.149 22:36:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:53.149 22:36:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:53.149 22:36:53 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:53.150 22:36:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.150 22:36:53 -- target/tls.sh@28 -- # bdevperf_pid=88661 00:16:53.150 22:36:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.150 22:36:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.150 22:36:53 -- target/tls.sh@31 -- # waitforlisten 88661 /var/tmp/bdevperf.sock 00:16:53.150 22:36:53 -- common/autotest_common.sh@829 -- # '[' -z 88661 ']' 00:16:53.150 22:36:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.150 22:36:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.150 22:36:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.150 22:36:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.150 22:36:53 -- common/autotest_common.sh@10 -- # set +x 00:16:53.150 [2024-11-20 22:36:53.747508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:53.150 [2024-11-20 22:36:53.747605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88661 ] 00:16:53.408 [2024-11-20 22:36:53.886473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.408 [2024-11-20 22:36:53.941889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.345 22:36:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.345 22:36:54 -- common/autotest_common.sh@862 -- # return 0 00:16:54.345 22:36:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:54.345 [2024-11-20 22:36:54.942851] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.345 [2024-11-20 22:36:54.947793] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:54.345 [2024-11-20 22:36:54.948326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9a7c0 (107): Transport endpoint is not connected 00:16:54.345 [2024-11-20 22:36:54.949320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9a7c0 (9): Bad file descriptor 00:16:54.345 [2024-11-20 22:36:54.950328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:54.345 [2024-11-20 22:36:54.950350] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:54.345 [2024-11-20 22:36:54.950360] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:54.345 2024/11/20 22:36:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:54.345 request: 00:16:54.345 { 00:16:54.345 "method": "bdev_nvme_attach_controller", 00:16:54.345 "params": { 00:16:54.345 "name": "TLSTEST", 00:16:54.345 "trtype": "tcp", 00:16:54.345 "traddr": "10.0.0.2", 00:16:54.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.345 "adrfam": "ipv4", 00:16:54.345 "trsvcid": "4420", 00:16:54.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.345 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:54.345 } 00:16:54.345 } 00:16:54.345 Got JSON-RPC error response 00:16:54.345 GoRPCClient: error on JSON-RPC call 00:16:54.345 22:36:54 -- target/tls.sh@36 -- # killprocess 88661 00:16:54.345 22:36:54 -- common/autotest_common.sh@936 -- # '[' -z 88661 ']' 00:16:54.345 22:36:54 -- common/autotest_common.sh@940 -- # kill -0 88661 00:16:54.345 22:36:54 -- common/autotest_common.sh@941 -- # uname 00:16:54.345 22:36:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:54.345 22:36:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88661 00:16:54.345 22:36:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:54.345 22:36:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:54.345 killing process with pid 88661 00:16:54.345 22:36:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88661' 00:16:54.345 22:36:54 -- common/autotest_common.sh@955 -- # kill 88661 00:16:54.346 22:36:54 -- common/autotest_common.sh@960 -- # wait 88661 00:16:54.346 Received shutdown signal, test time was about 10.000000 seconds 00:16:54.346 00:16:54.346 Latency(us) 00:16:54.346 [2024-11-20T22:36:55.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.346 [2024-11-20T22:36:55.080Z] =================================================================================================================== 00:16:54.346 [2024-11-20T22:36:55.080Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:54.605 22:36:55 -- target/tls.sh@37 -- # return 1 00:16:54.605 22:36:55 -- common/autotest_common.sh@653 -- # es=1 00:16:54.605 22:36:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.605 22:36:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.605 22:36:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.605 22:36:55 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.605 22:36:55 -- common/autotest_common.sh@650 -- # local es=0 00:16:54.605 22:36:55 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.605 22:36:55 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:54.605 22:36:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.605 22:36:55 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:54.605 22:36:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.605 22:36:55 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.605 22:36:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:54.605 22:36:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:54.605 22:36:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:54.605 22:36:55 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:54.605 22:36:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.605 22:36:55 -- target/tls.sh@28 -- # bdevperf_pid=88708 00:16:54.605 22:36:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.605 22:36:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:54.605 22:36:55 -- target/tls.sh@31 -- # waitforlisten 88708 /var/tmp/bdevperf.sock 00:16:54.605 22:36:55 -- common/autotest_common.sh@829 -- # '[' -z 88708 ']' 00:16:54.605 22:36:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.605 22:36:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.605 22:36:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.605 22:36:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.605 22:36:55 -- common/autotest_common.sh@10 -- # set +x 00:16:54.605 [2024-11-20 22:36:55.232013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:54.605 [2024-11-20 22:36:55.232120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88708 ] 00:16:54.863 [2024-11-20 22:36:55.367563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.863 [2024-11-20 22:36:55.428993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.431 22:36:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.431 22:36:56 -- common/autotest_common.sh@862 -- # return 0 00:16:55.431 22:36:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.689 [2024-11-20 22:36:56.398425] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:55.689 [2024-11-20 22:36:56.405849] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:55.690 [2024-11-20 22:36:56.405906] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:55.690 [2024-11-20 22:36:56.405961] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:55.690 [2024-11-20 22:36:56.406877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x22427c0 (107): Transport endpoint is not connected 00:16:55.690 [2024-11-20 22:36:56.407863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22427c0 (9): Bad file descriptor 00:16:55.690 [2024-11-20 22:36:56.408860] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:55.690 [2024-11-20 22:36:56.408884] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:55.690 [2024-11-20 22:36:56.408893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:55.690 2024/11/20 22:36:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:55.690 request: 00:16:55.690 { 00:16:55.690 "method": "bdev_nvme_attach_controller", 00:16:55.690 "params": { 00:16:55.690 "name": "TLSTEST", 00:16:55.690 "trtype": "tcp", 00:16:55.690 "traddr": "10.0.0.2", 00:16:55.690 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:55.690 "adrfam": "ipv4", 00:16:55.690 "trsvcid": "4420", 00:16:55.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.690 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:55.690 } 00:16:55.690 } 00:16:55.690 Got JSON-RPC error response 00:16:55.690 GoRPCClient: error on JSON-RPC call 00:16:55.948 22:36:56 -- target/tls.sh@36 -- # killprocess 88708 00:16:55.948 22:36:56 -- common/autotest_common.sh@936 -- # '[' -z 88708 ']' 00:16:55.948 22:36:56 -- common/autotest_common.sh@940 -- # kill -0 88708 00:16:55.948 22:36:56 -- common/autotest_common.sh@941 -- # uname 00:16:55.948 22:36:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:55.949 22:36:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88708 00:16:55.949 22:36:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:55.949 22:36:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:55.949 killing process with pid 88708 00:16:55.949 22:36:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88708' 00:16:55.949 22:36:56 -- common/autotest_common.sh@955 -- # kill 88708 00:16:55.949 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.949 00:16:55.949 Latency(us) 00:16:55.949 [2024-11-20T22:36:56.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.949 [2024-11-20T22:36:56.683Z] =================================================================================================================== 00:16:55.949 [2024-11-20T22:36:56.683Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:55.949 22:36:56 -- common/autotest_common.sh@960 -- # wait 88708 00:16:55.949 22:36:56 -- target/tls.sh@37 -- # return 1 00:16:55.949 22:36:56 -- common/autotest_common.sh@653 -- # es=1 00:16:55.949 22:36:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.949 22:36:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.949 22:36:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.949 22:36:56 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.949 22:36:56 -- 
common/autotest_common.sh@650 -- # local es=0 00:16:55.949 22:36:56 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.949 22:36:56 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:55.949 22:36:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.949 22:36:56 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:55.949 22:36:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.949 22:36:56 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.949 22:36:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:55.949 22:36:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:55.949 22:36:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:55.949 22:36:56 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:55.949 22:36:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:55.949 22:36:56 -- target/tls.sh@28 -- # bdevperf_pid=88748 00:16:55.949 22:36:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:55.949 22:36:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.949 22:36:56 -- target/tls.sh@31 -- # waitforlisten 88748 /var/tmp/bdevperf.sock 00:16:55.949 22:36:56 -- common/autotest_common.sh@829 -- # '[' -z 88748 ']' 00:16:55.949 22:36:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.949 22:36:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.949 22:36:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.949 22:36:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.949 22:36:56 -- common/autotest_common.sh@10 -- # set +x 00:16:56.208 [2024-11-20 22:36:56.692028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:56.208 [2024-11-20 22:36:56.692130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88748 ] 00:16:56.208 [2024-11-20 22:36:56.828312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.208 [2024-11-20 22:36:56.893736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.143 22:36:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.143 22:36:57 -- common/autotest_common.sh@862 -- # return 0 00:16:57.143 22:36:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:57.401 [2024-11-20 22:36:57.917951] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.401 [2024-11-20 22:36:57.925923] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:57.401 [2024-11-20 22:36:57.925960] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:57.401 [2024-11-20 22:36:57.926009] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:57.401 [2024-11-20 22:36:57.926519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d297c0 (107): Transport endpoint is not connected 00:16:57.401 [2024-11-20 22:36:57.927500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d297c0 (9): Bad file descriptor 00:16:57.401 [2024-11-20 22:36:57.928496] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:57.401 [2024-11-20 22:36:57.928529] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:57.401 [2024-11-20 22:36:57.928539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:57.401 2024/11/20 22:36:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:57.401 request: 00:16:57.401 { 00:16:57.401 "method": "bdev_nvme_attach_controller", 00:16:57.401 "params": { 00:16:57.401 "name": "TLSTEST", 00:16:57.401 "trtype": "tcp", 00:16:57.401 "traddr": "10.0.0.2", 00:16:57.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.401 "adrfam": "ipv4", 00:16:57.401 "trsvcid": "4420", 00:16:57.401 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:57.401 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:57.401 } 00:16:57.401 } 00:16:57.401 Got JSON-RPC error response 00:16:57.401 GoRPCClient: error on JSON-RPC call 00:16:57.401 22:36:57 -- target/tls.sh@36 -- # killprocess 88748 00:16:57.401 22:36:57 -- common/autotest_common.sh@936 -- # '[' -z 88748 ']' 00:16:57.401 22:36:57 -- common/autotest_common.sh@940 -- # kill -0 88748 00:16:57.401 22:36:57 -- common/autotest_common.sh@941 -- # uname 00:16:57.401 22:36:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.401 22:36:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88748 00:16:57.401 killing process with pid 88748 00:16:57.401 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.401 00:16:57.401 Latency(us) 00:16:57.401 [2024-11-20T22:36:58.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.401 [2024-11-20T22:36:58.135Z] =================================================================================================================== 00:16:57.401 [2024-11-20T22:36:58.135Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:57.401 22:36:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:57.401 22:36:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:57.401 22:36:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88748' 00:16:57.401 22:36:57 -- common/autotest_common.sh@955 -- # kill 88748 00:16:57.401 22:36:57 -- common/autotest_common.sh@960 -- # wait 88748 00:16:57.660 22:36:58 -- target/tls.sh@37 -- # return 1 00:16:57.660 22:36:58 -- common/autotest_common.sh@653 -- # es=1 00:16:57.660 22:36:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.660 22:36:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.660 22:36:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.660 22:36:58 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:57.660 22:36:58 -- common/autotest_common.sh@650 -- # local es=0 00:16:57.660 22:36:58 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:57.660 22:36:58 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:57.660 22:36:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.660 22:36:58 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:57.660 22:36:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.660 22:36:58 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:57.660 22:36:58 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:57.660 22:36:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:57.660 22:36:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:57.660 22:36:58 -- target/tls.sh@23 -- # psk= 00:16:57.660 22:36:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.660 22:36:58 -- target/tls.sh@28 -- # bdevperf_pid=88794 00:16:57.660 22:36:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.660 22:36:58 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:57.660 22:36:58 -- target/tls.sh@31 -- # waitforlisten 88794 /var/tmp/bdevperf.sock 00:16:57.660 22:36:58 -- common/autotest_common.sh@829 -- # '[' -z 88794 ']' 00:16:57.661 22:36:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.661 22:36:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.661 22:36:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.661 22:36:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.661 22:36:58 -- common/autotest_common.sh@10 -- # set +x 00:16:57.661 [2024-11-20 22:36:58.214637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:57.661 [2024-11-20 22:36:58.214921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88794 ] 00:16:57.661 [2024-11-20 22:36:58.354259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.919 [2024-11-20 22:36:58.415071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.487 22:36:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.487 22:36:59 -- common/autotest_common.sh@862 -- # return 0 00:16:58.487 22:36:59 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:58.746 [2024-11-20 22:36:59.376097] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:58.746 [2024-11-20 22:36:59.377800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc5090 (9): Bad file descriptor 00:16:58.746 [2024-11-20 22:36:59.378793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:58.746 [2024-11-20 22:36:59.379251] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:58.746 [2024-11-20 22:36:59.379506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:58.746 2024/11/20 22:36:59 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:58.746 request: 00:16:58.746 { 00:16:58.746 "method": "bdev_nvme_attach_controller", 00:16:58.746 "params": { 00:16:58.746 "name": "TLSTEST", 00:16:58.746 "trtype": "tcp", 00:16:58.746 "traddr": "10.0.0.2", 00:16:58.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.746 "adrfam": "ipv4", 00:16:58.746 "trsvcid": "4420", 00:16:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:58.746 } 00:16:58.746 } 00:16:58.746 Got JSON-RPC error response 00:16:58.746 GoRPCClient: error on JSON-RPC call 00:16:58.746 22:36:59 -- target/tls.sh@36 -- # killprocess 88794 00:16:58.746 22:36:59 -- common/autotest_common.sh@936 -- # '[' -z 88794 ']' 00:16:58.746 22:36:59 -- common/autotest_common.sh@940 -- # kill -0 88794 00:16:58.746 22:36:59 -- common/autotest_common.sh@941 -- # uname 00:16:58.746 22:36:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:58.746 22:36:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88794 00:16:58.746 22:36:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:58.746 killing process with pid 88794 00:16:58.746 22:36:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:58.746 22:36:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88794' 00:16:58.746 22:36:59 -- common/autotest_common.sh@955 -- # kill 88794 00:16:58.746 Received shutdown signal, test time was about 10.000000 seconds 00:16:58.746 00:16:58.746 Latency(us) 00:16:58.746 [2024-11-20T22:36:59.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.746 [2024-11-20T22:36:59.480Z] =================================================================================================================== 00:16:58.746 [2024-11-20T22:36:59.480Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:58.746 22:36:59 -- common/autotest_common.sh@960 -- # wait 88794 00:16:59.004 22:36:59 -- target/tls.sh@37 -- # return 1 00:16:59.004 22:36:59 -- common/autotest_common.sh@653 -- # es=1 00:16:59.004 22:36:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.004 22:36:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.004 22:36:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.004 22:36:59 -- target/tls.sh@167 -- # killprocess 88157 00:16:59.004 22:36:59 -- common/autotest_common.sh@936 -- # '[' -z 88157 ']' 00:16:59.004 22:36:59 -- common/autotest_common.sh@940 -- # kill -0 88157 00:16:59.005 22:36:59 -- common/autotest_common.sh@941 -- # uname 00:16:59.005 22:36:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.005 22:36:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88157 00:16:59.005 killing process with pid 88157 00:16:59.005 22:36:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:59.005 22:36:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:59.005 22:36:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88157' 00:16:59.005 22:36:59 -- common/autotest_common.sh@955 -- # kill 88157 00:16:59.005 22:36:59 -- common/autotest_common.sh@960 -- # wait 88157 00:16:59.262 22:36:59 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:16:59.262 22:36:59 -- target/tls.sh@49 -- # local key hash crc 00:16:59.262 22:36:59 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:59.262 22:36:59 -- target/tls.sh@51 -- # hash=02 00:16:59.262 22:36:59 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:59.262 22:36:59 -- target/tls.sh@52 -- # gzip -1 -c 00:16:59.262 22:36:59 -- target/tls.sh@52 -- # head -c 4 00:16:59.262 22:36:59 -- target/tls.sh@52 -- # tail -c8 00:16:59.262 22:36:59 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:59.262 22:36:59 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:59.262 22:36:59 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:59.262 22:36:59 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:59.262 22:36:59 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:59.262 22:36:59 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:59.262 22:36:59 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:59.262 22:36:59 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:59.262 22:36:59 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:59.262 22:36:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:59.262 22:36:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.262 22:36:59 -- common/autotest_common.sh@10 -- # set +x 00:16:59.262 22:36:59 -- nvmf/common.sh@469 -- # nvmfpid=88855 00:16:59.262 22:36:59 -- nvmf/common.sh@470 -- # waitforlisten 88855 00:16:59.262 22:36:59 -- common/autotest_common.sh@829 -- # '[' -z 88855 ']' 00:16:59.262 22:36:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:59.262 22:36:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.262 22:36:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.262 22:36:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.262 22:36:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.262 22:36:59 -- common/autotest_common.sh@10 -- # set +x 00:16:59.262 [2024-11-20 22:36:59.989176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:59.262 [2024-11-20 22:36:59.989269] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.520 [2024-11-20 22:37:00.124262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.520 [2024-11-20 22:37:00.188856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:59.520 [2024-11-20 22:37:00.189010] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
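The format_interchange_psk invocation above is what produces the long-form TLS key that the remainder of the run reads from key_long.txt: the configured hex string is treated as plain ASCII, a CRC-32 is taken from the gzip trailer of that string, and the key bytes plus CRC are base64-encoded behind a versioned "NVMeTLSkey-1:<hash>:" prefix. A minimal stand-alone sketch of that pipeline follows; it assumes the same 48-character key and hash id 02 that appear in the log, and key_long.txt is just the file name this harness happens to use.

#!/usr/bin/env bash
# Sketch of the interchange-PSK construction traced above (illustrative, not
# the canonical tls.sh helper; key and hash are the values from this run).
key=00112233445566778899aabbccddeeff0011223344556677
hash=02

# The gzip trailer is CRC-32 (4 bytes) followed by the input length (4 bytes),
# so "tail -c8 | head -c4" pulls out the CRC-32 of the key string.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)

# Interchange form: versioned prefix, then base64 of key bytes followed by CRC.
psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
echo "$psk"
# Expected output for this key/hash, matching the log:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

# Store it with owner-only permissions before handing it to --psk, as the
# permission tests later in this run require.
echo -n "$psk" > key_long.txt
chmod 0600 key_long.txt

Carrying the raw CRC bytes through a shell variable works here because, for this key, none of the four bytes is a NUL or a trailing newline; the harness relies on the same property.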
00:16:59.520 [2024-11-20 22:37:00.189025] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.520 [2024-11-20 22:37:00.189035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.520 [2024-11-20 22:37:00.189062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.454 22:37:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.454 22:37:00 -- common/autotest_common.sh@862 -- # return 0 00:17:00.454 22:37:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:00.454 22:37:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:00.454 22:37:00 -- common/autotest_common.sh@10 -- # set +x 00:17:00.454 22:37:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.454 22:37:00 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:00.454 22:37:00 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:00.454 22:37:00 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:00.711 [2024-11-20 22:37:01.243125] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.711 22:37:01 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:00.969 22:37:01 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:00.969 [2024-11-20 22:37:01.647190] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:00.969 [2024-11-20 22:37:01.647466] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.969 22:37:01 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:01.228 malloc0 00:17:01.228 22:37:01 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:01.486 22:37:02 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:01.745 22:37:02 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:01.745 22:37:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:01.745 22:37:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:01.745 22:37:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:01.745 22:37:02 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:01.745 22:37:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.745 22:37:02 -- target/tls.sh@28 -- # bdevperf_pid=88957 00:17:01.745 22:37:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:01.745 22:37:02 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:01.745 22:37:02 -- target/tls.sh@31 -- # waitforlisten 88957 /var/tmp/bdevperf.sock 00:17:01.745 22:37:02 -- 
common/autotest_common.sh@829 -- # '[' -z 88957 ']' 00:17:01.745 22:37:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.745 22:37:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.745 22:37:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.745 22:37:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.745 22:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:01.745 [2024-11-20 22:37:02.458671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:01.745 [2024-11-20 22:37:02.458753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88957 ] 00:17:02.004 [2024-11-20 22:37:02.593148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.005 [2024-11-20 22:37:02.658026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.940 22:37:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.940 22:37:03 -- common/autotest_common.sh@862 -- # return 0 00:17:02.940 22:37:03 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:02.940 [2024-11-20 22:37:03.553365] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:02.940 TLSTESTn1 00:17:02.940 22:37:03 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:03.199 Running I/O for 10 seconds... 
00:17:13.244 00:17:13.244 Latency(us) 00:17:13.244 [2024-11-20T22:37:13.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.244 [2024-11-20T22:37:13.978Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:13.244 Verification LBA range: start 0x0 length 0x2000 00:17:13.244 TLSTESTn1 : 10.02 5175.70 20.22 0.00 0.00 24689.66 4974.78 23592.96 00:17:13.244 [2024-11-20T22:37:13.978Z] =================================================================================================================== 00:17:13.244 [2024-11-20T22:37:13.978Z] Total : 5175.70 20.22 0.00 0.00 24689.66 4974.78 23592.96 00:17:13.244 0 00:17:13.244 22:37:13 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:13.244 22:37:13 -- target/tls.sh@45 -- # killprocess 88957 00:17:13.244 22:37:13 -- common/autotest_common.sh@936 -- # '[' -z 88957 ']' 00:17:13.244 22:37:13 -- common/autotest_common.sh@940 -- # kill -0 88957 00:17:13.244 22:37:13 -- common/autotest_common.sh@941 -- # uname 00:17:13.244 22:37:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.244 22:37:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88957 00:17:13.244 killing process with pid 88957 00:17:13.244 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.244 00:17:13.244 Latency(us) 00:17:13.244 [2024-11-20T22:37:13.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.244 [2024-11-20T22:37:13.978Z] =================================================================================================================== 00:17:13.244 [2024-11-20T22:37:13.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.244 22:37:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:13.244 22:37:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:13.244 22:37:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88957' 00:17:13.244 22:37:13 -- common/autotest_common.sh@955 -- # kill 88957 00:17:13.244 22:37:13 -- common/autotest_common.sh@960 -- # wait 88957 00:17:13.504 22:37:13 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.504 22:37:13 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.504 22:37:13 -- common/autotest_common.sh@650 -- # local es=0 00:17:13.504 22:37:13 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.504 22:37:13 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:13.504 22:37:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.504 22:37:13 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:13.504 22:37:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.504 22:37:13 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.504 22:37:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.504 22:37:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.504 22:37:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.504 22:37:14 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:13.504 22:37:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.504 22:37:14 -- target/tls.sh@28 -- # bdevperf_pid=89105 00:17:13.504 22:37:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.504 22:37:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.504 22:37:14 -- target/tls.sh@31 -- # waitforlisten 89105 /var/tmp/bdevperf.sock 00:17:13.504 22:37:14 -- common/autotest_common.sh@829 -- # '[' -z 89105 ']' 00:17:13.504 22:37:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.504 22:37:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.504 22:37:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.504 22:37:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.504 22:37:14 -- common/autotest_common.sh@10 -- # set +x 00:17:13.504 [2024-11-20 22:37:14.054655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:13.504 [2024-11-20 22:37:14.054930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89105 ] 00:17:13.504 [2024-11-20 22:37:14.193687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.764 [2024-11-20 22:37:14.261238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.332 22:37:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.332 22:37:15 -- common/autotest_common.sh@862 -- # return 0 00:17:14.332 22:37:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.591 [2024-11-20 22:37:15.256705] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.591 [2024-11-20 22:37:15.256763] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:14.591 2024/11/20 22:37:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.591 request: 00:17:14.591 { 00:17:14.591 "method": "bdev_nvme_attach_controller", 00:17:14.591 "params": { 00:17:14.591 "name": "TLSTEST", 00:17:14.591 "trtype": "tcp", 00:17:14.591 "traddr": "10.0.0.2", 00:17:14.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.591 "adrfam": "ipv4", 00:17:14.591 "trsvcid": "4420", 00:17:14.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.591 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:14.591 } 00:17:14.591 } 00:17:14.591 Got 
JSON-RPC error response 00:17:14.591 GoRPCClient: error on JSON-RPC call 00:17:14.591 22:37:15 -- target/tls.sh@36 -- # killprocess 89105 00:17:14.591 22:37:15 -- common/autotest_common.sh@936 -- # '[' -z 89105 ']' 00:17:14.591 22:37:15 -- common/autotest_common.sh@940 -- # kill -0 89105 00:17:14.591 22:37:15 -- common/autotest_common.sh@941 -- # uname 00:17:14.591 22:37:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.591 22:37:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89105 00:17:14.591 killing process with pid 89105 00:17:14.591 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.591 00:17:14.591 Latency(us) 00:17:14.591 [2024-11-20T22:37:15.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.591 [2024-11-20T22:37:15.325Z] =================================================================================================================== 00:17:14.591 [2024-11-20T22:37:15.325Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.591 22:37:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:14.591 22:37:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:14.591 22:37:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89105' 00:17:14.591 22:37:15 -- common/autotest_common.sh@955 -- # kill 89105 00:17:14.591 22:37:15 -- common/autotest_common.sh@960 -- # wait 89105 00:17:14.850 22:37:15 -- target/tls.sh@37 -- # return 1 00:17:14.850 22:37:15 -- common/autotest_common.sh@653 -- # es=1 00:17:14.850 22:37:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.850 22:37:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.850 22:37:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.850 22:37:15 -- target/tls.sh@183 -- # killprocess 88855 00:17:14.850 22:37:15 -- common/autotest_common.sh@936 -- # '[' -z 88855 ']' 00:17:14.850 22:37:15 -- common/autotest_common.sh@940 -- # kill -0 88855 00:17:14.850 22:37:15 -- common/autotest_common.sh@941 -- # uname 00:17:14.850 22:37:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.850 22:37:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88855 00:17:14.850 killing process with pid 88855 00:17:14.850 22:37:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:14.850 22:37:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:14.850 22:37:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88855' 00:17:14.850 22:37:15 -- common/autotest_common.sh@955 -- # kill 88855 00:17:14.850 22:37:15 -- common/autotest_common.sh@960 -- # wait 88855 00:17:15.109 22:37:15 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:15.109 22:37:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.109 22:37:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:15.109 22:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:15.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
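The host-side failure above and the nvmf_subsystem_add_host failure a little further down are driven by the same check: once key_long.txt is loosened with chmod 0666, both the initiator RPC (bdev_nvme_attach_controller) and the target RPC (nvmf_subsystem_add_host) refuse to load the PSK until the permissions are tightened again. A short sketch of the reproduce-and-recover sequence, using the same rpc.py invocations as this run (socket path, NQNs and file paths are simply the ones from this environment):

# Make the key file group/world accessible; both sides now reject it.
chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

# Initiator side: fails with Code=-22 "Could not retrieve PSK from file".
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

# Target side: fails with Code=-32603 after "Incorrect permissions for PSK file".
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

# Restoring owner-only access lets both calls succeed again.
chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt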
00:17:15.109 22:37:15 -- nvmf/common.sh@469 -- # nvmfpid=89161 00:17:15.109 22:37:15 -- nvmf/common.sh@470 -- # waitforlisten 89161 00:17:15.109 22:37:15 -- common/autotest_common.sh@829 -- # '[' -z 89161 ']' 00:17:15.109 22:37:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.109 22:37:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.109 22:37:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.109 22:37:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.109 22:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:15.109 22:37:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.109 [2024-11-20 22:37:15.834889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:15.109 [2024-11-20 22:37:15.835742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.368 [2024-11-20 22:37:15.974661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.368 [2024-11-20 22:37:16.035360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.368 [2024-11-20 22:37:16.035512] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.368 [2024-11-20 22:37:16.035525] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.368 [2024-11-20 22:37:16.035534] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:15.368 [2024-11-20 22:37:16.035564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.302 22:37:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.302 22:37:16 -- common/autotest_common.sh@862 -- # return 0 00:17:16.302 22:37:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:16.302 22:37:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.302 22:37:16 -- common/autotest_common.sh@10 -- # set +x 00:17:16.302 22:37:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.302 22:37:16 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.302 22:37:16 -- common/autotest_common.sh@650 -- # local es=0 00:17:16.302 22:37:16 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.302 22:37:16 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:16.302 22:37:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.302 22:37:16 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:16.302 22:37:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.302 22:37:16 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.302 22:37:16 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.302 22:37:16 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.302 [2024-11-20 22:37:16.998852] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.302 22:37:17 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.561 22:37:17 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.820 [2024-11-20 22:37:17.462920] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.820 [2024-11-20 22:37:17.463148] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.820 22:37:17 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:17.079 malloc0 00:17:17.079 22:37:17 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.337 22:37:17 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.596 [2024-11-20 22:37:18.149010] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:17.596 [2024-11-20 22:37:18.149043] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:17.596 [2024-11-20 22:37:18.149061] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:17.596 2024/11/20 22:37:18 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, 
err: Code=-32603 Msg=Internal error 00:17:17.596 request: 00:17:17.596 { 00:17:17.596 "method": "nvmf_subsystem_add_host", 00:17:17.596 "params": { 00:17:17.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.596 "host": "nqn.2016-06.io.spdk:host1", 00:17:17.596 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:17.596 } 00:17:17.596 } 00:17:17.596 Got JSON-RPC error response 00:17:17.596 GoRPCClient: error on JSON-RPC call 00:17:17.596 22:37:18 -- common/autotest_common.sh@653 -- # es=1 00:17:17.596 22:37:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:17.596 22:37:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:17.596 22:37:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:17.596 22:37:18 -- target/tls.sh@189 -- # killprocess 89161 00:17:17.596 22:37:18 -- common/autotest_common.sh@936 -- # '[' -z 89161 ']' 00:17:17.596 22:37:18 -- common/autotest_common.sh@940 -- # kill -0 89161 00:17:17.596 22:37:18 -- common/autotest_common.sh@941 -- # uname 00:17:17.596 22:37:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.596 22:37:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89161 00:17:17.596 22:37:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:17.596 22:37:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:17.596 killing process with pid 89161 00:17:17.596 22:37:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89161' 00:17:17.596 22:37:18 -- common/autotest_common.sh@955 -- # kill 89161 00:17:17.596 22:37:18 -- common/autotest_common.sh@960 -- # wait 89161 00:17:17.855 22:37:18 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.855 22:37:18 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:17.855 22:37:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:17.855 22:37:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:17.855 22:37:18 -- common/autotest_common.sh@10 -- # set +x 00:17:17.855 22:37:18 -- nvmf/common.sh@469 -- # nvmfpid=89272 00:17:17.855 22:37:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:17.855 22:37:18 -- nvmf/common.sh@470 -- # waitforlisten 89272 00:17:17.855 22:37:18 -- common/autotest_common.sh@829 -- # '[' -z 89272 ']' 00:17:17.855 22:37:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.855 22:37:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.855 22:37:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.855 22:37:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.855 22:37:18 -- common/autotest_common.sh@10 -- # set +x 00:17:17.855 [2024-11-20 22:37:18.530013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:17.855 [2024-11-20 22:37:18.530102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.114 [2024-11-20 22:37:18.666376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.114 [2024-11-20 22:37:18.723414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:18.114 [2024-11-20 22:37:18.723571] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.114 [2024-11-20 22:37:18.723585] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.114 [2024-11-20 22:37:18.723593] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.114 [2024-11-20 22:37:18.723630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.048 22:37:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.048 22:37:19 -- common/autotest_common.sh@862 -- # return 0 00:17:19.048 22:37:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:19.048 22:37:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.048 22:37:19 -- common/autotest_common.sh@10 -- # set +x 00:17:19.048 22:37:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.048 22:37:19 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.048 22:37:19 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.048 22:37:19 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:19.048 [2024-11-20 22:37:19.762692] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.306 22:37:19 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:19.306 22:37:19 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:19.565 [2024-11-20 22:37:20.154823] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:19.565 [2024-11-20 22:37:20.155039] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.565 22:37:20 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:19.824 malloc0 00:17:19.824 22:37:20 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:20.083 22:37:20 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
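With the target configured for TLS (the listener created with -k and the host registered against the 0600 key), the initiator half of the passing case reduces to the three steps sketched below, which is what the run does for bdevperf pid 88957 earlier and repeats here for pid 89369; binary paths, socket and NQNs are the ones used throughout this log.

# 1. Start bdevperf in RPC-server mode (the harness then waits for
#    /var/tmp/bdevperf.sock to come up before issuing RPCs).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# 2. Attach the controller over TCP with the PSK that was registered on the
#    target via nvmf_subsystem_add_host; this creates the TLSTESTn1 bdev.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

# 3. Drive the verify workload against the new bdev.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
    -s /var/tmp/bdevperf.sock perform_tests

In the earlier passing run this yielded roughly 5.2k IOPS of 4096-byte verify I/O over the 10-second window, whereas every mismatched or unreadable PSK above failed the attach immediately instead.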
00:17:20.343 22:37:20 -- target/tls.sh@197 -- # bdevperf_pid=89369 00:17:20.343 22:37:20 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.343 22:37:20 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.343 22:37:20 -- target/tls.sh@200 -- # waitforlisten 89369 /var/tmp/bdevperf.sock 00:17:20.343 22:37:20 -- common/autotest_common.sh@829 -- # '[' -z 89369 ']' 00:17:20.343 22:37:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.343 22:37:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.343 22:37:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.343 22:37:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.343 22:37:20 -- common/autotest_common.sh@10 -- # set +x 00:17:20.343 [2024-11-20 22:37:20.897058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:20.343 [2024-11-20 22:37:20.897758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89369 ] 00:17:20.343 [2024-11-20 22:37:21.036073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.602 [2024-11-20 22:37:21.103622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.169 22:37:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.169 22:37:21 -- common/autotest_common.sh@862 -- # return 0 00:17:21.169 22:37:21 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:21.428 [2024-11-20 22:37:22.058080] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.428 TLSTESTn1 00:17:21.428 22:37:22 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:21.688 22:37:22 -- target/tls.sh@205 -- # tgtconf='{ 00:17:21.688 "subsystems": [ 00:17:21.688 { 00:17:21.688 "subsystem": "iobuf", 00:17:21.688 "config": [ 00:17:21.688 { 00:17:21.688 "method": "iobuf_set_options", 00:17:21.688 "params": { 00:17:21.688 "large_bufsize": 135168, 00:17:21.688 "large_pool_count": 1024, 00:17:21.688 "small_bufsize": 8192, 00:17:21.688 "small_pool_count": 8192 00:17:21.688 } 00:17:21.688 } 00:17:21.688 ] 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "subsystem": "sock", 00:17:21.688 "config": [ 00:17:21.688 { 00:17:21.688 "method": "sock_impl_set_options", 00:17:21.688 "params": { 00:17:21.688 "enable_ktls": false, 00:17:21.688 "enable_placement_id": 0, 00:17:21.688 "enable_quickack": false, 00:17:21.688 "enable_recv_pipe": true, 00:17:21.688 "enable_zerocopy_send_client": false, 00:17:21.688 "enable_zerocopy_send_server": true, 00:17:21.688 "impl_name": "posix", 00:17:21.688 "recv_buf_size": 2097152, 00:17:21.688 "send_buf_size": 2097152, 00:17:21.688 "tls_version": 0, 00:17:21.688 "zerocopy_threshold": 0 00:17:21.688 } 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "method": "sock_impl_set_options", 00:17:21.688 "params": { 00:17:21.688 "enable_ktls": false, 00:17:21.688 
"enable_placement_id": 0, 00:17:21.688 "enable_quickack": false, 00:17:21.688 "enable_recv_pipe": true, 00:17:21.688 "enable_zerocopy_send_client": false, 00:17:21.688 "enable_zerocopy_send_server": true, 00:17:21.688 "impl_name": "ssl", 00:17:21.688 "recv_buf_size": 4096, 00:17:21.688 "send_buf_size": 4096, 00:17:21.688 "tls_version": 0, 00:17:21.688 "zerocopy_threshold": 0 00:17:21.688 } 00:17:21.688 } 00:17:21.688 ] 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "subsystem": "vmd", 00:17:21.688 "config": [] 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "subsystem": "accel", 00:17:21.688 "config": [ 00:17:21.688 { 00:17:21.688 "method": "accel_set_options", 00:17:21.688 "params": { 00:17:21.688 "buf_count": 2048, 00:17:21.688 "large_cache_size": 16, 00:17:21.688 "sequence_count": 2048, 00:17:21.688 "small_cache_size": 128, 00:17:21.688 "task_count": 2048 00:17:21.688 } 00:17:21.688 } 00:17:21.688 ] 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "subsystem": "bdev", 00:17:21.688 "config": [ 00:17:21.688 { 00:17:21.688 "method": "bdev_set_options", 00:17:21.688 "params": { 00:17:21.688 "bdev_auto_examine": true, 00:17:21.688 "bdev_io_cache_size": 256, 00:17:21.688 "bdev_io_pool_size": 65535, 00:17:21.688 "iobuf_large_cache_size": 16, 00:17:21.688 "iobuf_small_cache_size": 128 00:17:21.688 } 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "method": "bdev_raid_set_options", 00:17:21.688 "params": { 00:17:21.688 "process_window_size_kb": 1024 00:17:21.688 } 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "method": "bdev_iscsi_set_options", 00:17:21.688 "params": { 00:17:21.688 "timeout_sec": 30 00:17:21.688 } 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "method": "bdev_nvme_set_options", 00:17:21.688 "params": { 00:17:21.688 "action_on_timeout": "none", 00:17:21.688 "allow_accel_sequence": false, 00:17:21.688 "arbitration_burst": 0, 00:17:21.688 "bdev_retry_count": 3, 00:17:21.688 "ctrlr_loss_timeout_sec": 0, 00:17:21.688 "delay_cmd_submit": true, 00:17:21.688 "fast_io_fail_timeout_sec": 0, 00:17:21.688 "generate_uuids": false, 00:17:21.688 "high_priority_weight": 0, 00:17:21.688 "io_path_stat": false, 00:17:21.688 "io_queue_requests": 0, 00:17:21.688 "keep_alive_timeout_ms": 10000, 00:17:21.688 "low_priority_weight": 0, 00:17:21.688 "medium_priority_weight": 0, 00:17:21.688 "nvme_adminq_poll_period_us": 10000, 00:17:21.688 "nvme_ioq_poll_period_us": 0, 00:17:21.688 "reconnect_delay_sec": 0, 00:17:21.688 "timeout_admin_us": 0, 00:17:21.688 "timeout_us": 0, 00:17:21.688 "transport_ack_timeout": 0, 00:17:21.688 "transport_retry_count": 4, 00:17:21.688 "transport_tos": 0 00:17:21.688 } 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "method": "bdev_nvme_set_hotplug", 00:17:21.688 "params": { 00:17:21.688 "enable": false, 00:17:21.688 "period_us": 100000 00:17:21.688 } 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "method": "bdev_malloc_create", 00:17:21.688 "params": { 00:17:21.688 "block_size": 4096, 00:17:21.688 "name": "malloc0", 00:17:21.688 "num_blocks": 8192, 00:17:21.688 "optimal_io_boundary": 0, 00:17:21.688 "physical_block_size": 4096, 00:17:21.688 "uuid": "dc7e0abc-ff82-49e3-a8bf-0f42d04c7da8" 00:17:21.688 } 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "method": "bdev_wait_for_examine" 00:17:21.688 } 00:17:21.688 ] 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "subsystem": "nbd", 00:17:21.688 "config": [] 00:17:21.688 }, 00:17:21.688 { 00:17:21.688 "subsystem": "scheduler", 00:17:21.689 "config": [ 00:17:21.689 { 00:17:21.689 "method": "framework_set_scheduler", 00:17:21.689 "params": { 00:17:21.689 "name": 
"static" 00:17:21.689 } 00:17:21.689 } 00:17:21.689 ] 00:17:21.689 }, 00:17:21.689 { 00:17:21.689 "subsystem": "nvmf", 00:17:21.689 "config": [ 00:17:21.689 { 00:17:21.689 "method": "nvmf_set_config", 00:17:21.689 "params": { 00:17:21.689 "admin_cmd_passthru": { 00:17:21.689 "identify_ctrlr": false 00:17:21.689 }, 00:17:21.689 "discovery_filter": "match_any" 00:17:21.689 } 00:17:21.689 }, 00:17:21.689 { 00:17:21.689 "method": "nvmf_set_max_subsystems", 00:17:21.689 "params": { 00:17:21.689 "max_subsystems": 1024 00:17:21.689 } 00:17:21.689 }, 00:17:21.689 { 00:17:21.689 "method": "nvmf_set_crdt", 00:17:21.689 "params": { 00:17:21.689 "crdt1": 0, 00:17:21.689 "crdt2": 0, 00:17:21.689 "crdt3": 0 00:17:21.689 } 00:17:21.689 }, 00:17:21.689 { 00:17:21.689 "method": "nvmf_create_transport", 00:17:21.689 "params": { 00:17:21.689 "abort_timeout_sec": 1, 00:17:21.689 "buf_cache_size": 4294967295, 00:17:21.689 "c2h_success": false, 00:17:21.689 "dif_insert_or_strip": false, 00:17:21.689 "in_capsule_data_size": 4096, 00:17:21.689 "io_unit_size": 131072, 00:17:21.689 "max_aq_depth": 128, 00:17:21.689 "max_io_qpairs_per_ctrlr": 127, 00:17:21.689 "max_io_size": 131072, 00:17:21.689 "max_queue_depth": 128, 00:17:21.689 "num_shared_buffers": 511, 00:17:21.689 "sock_priority": 0, 00:17:21.689 "trtype": "TCP", 00:17:21.689 "zcopy": false 00:17:21.689 } 00:17:21.689 }, 00:17:21.689 { 00:17:21.689 "method": "nvmf_create_subsystem", 00:17:21.689 "params": { 00:17:21.689 "allow_any_host": false, 00:17:21.689 "ana_reporting": false, 00:17:21.689 "max_cntlid": 65519, 00:17:21.689 "max_namespaces": 10, 00:17:21.689 "min_cntlid": 1, 00:17:21.689 "model_number": "SPDK bdev Controller", 00:17:21.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.689 "serial_number": "SPDK00000000000001" 00:17:21.689 } 00:17:21.689 }, 00:17:21.689 { 00:17:21.689 "method": "nvmf_subsystem_add_host", 00:17:21.689 "params": { 00:17:21.689 "host": "nqn.2016-06.io.spdk:host1", 00:17:21.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.689 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:21.689 } 00:17:21.689 }, 00:17:21.689 { 00:17:21.689 "method": "nvmf_subsystem_add_ns", 00:17:21.689 "params": { 00:17:21.689 "namespace": { 00:17:21.689 "bdev_name": "malloc0", 00:17:21.689 "nguid": "DC7E0ABCFF8249E3A8BF0F42D04C7DA8", 00:17:21.689 "nsid": 1, 00:17:21.689 "uuid": "dc7e0abc-ff82-49e3-a8bf-0f42d04c7da8" 00:17:21.689 }, 00:17:21.689 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:21.689 } 00:17:21.689 }, 00:17:21.689 { 00:17:21.689 "method": "nvmf_subsystem_add_listener", 00:17:21.689 "params": { 00:17:21.689 "listen_address": { 00:17:21.689 "adrfam": "IPv4", 00:17:21.689 "traddr": "10.0.0.2", 00:17:21.689 "trsvcid": "4420", 00:17:21.689 "trtype": "TCP" 00:17:21.689 }, 00:17:21.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.689 "secure_channel": true 00:17:21.689 } 00:17:21.689 } 00:17:21.689 ] 00:17:21.689 } 00:17:21.689 ] 00:17:21.689 }' 00:17:21.948 22:37:22 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:22.207 22:37:22 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:22.207 "subsystems": [ 00:17:22.207 { 00:17:22.207 "subsystem": "iobuf", 00:17:22.207 "config": [ 00:17:22.207 { 00:17:22.207 "method": "iobuf_set_options", 00:17:22.207 "params": { 00:17:22.207 "large_bufsize": 135168, 00:17:22.207 "large_pool_count": 1024, 00:17:22.207 "small_bufsize": 8192, 00:17:22.207 "small_pool_count": 8192 00:17:22.207 } 00:17:22.207 } 00:17:22.207 ] 00:17:22.207 }, 
00:17:22.207 { 00:17:22.207 "subsystem": "sock", 00:17:22.207 "config": [ 00:17:22.207 { 00:17:22.207 "method": "sock_impl_set_options", 00:17:22.207 "params": { 00:17:22.207 "enable_ktls": false, 00:17:22.207 "enable_placement_id": 0, 00:17:22.207 "enable_quickack": false, 00:17:22.207 "enable_recv_pipe": true, 00:17:22.207 "enable_zerocopy_send_client": false, 00:17:22.207 "enable_zerocopy_send_server": true, 00:17:22.207 "impl_name": "posix", 00:17:22.207 "recv_buf_size": 2097152, 00:17:22.207 "send_buf_size": 2097152, 00:17:22.207 "tls_version": 0, 00:17:22.207 "zerocopy_threshold": 0 00:17:22.207 } 00:17:22.207 }, 00:17:22.207 { 00:17:22.207 "method": "sock_impl_set_options", 00:17:22.207 "params": { 00:17:22.207 "enable_ktls": false, 00:17:22.207 "enable_placement_id": 0, 00:17:22.207 "enable_quickack": false, 00:17:22.207 "enable_recv_pipe": true, 00:17:22.207 "enable_zerocopy_send_client": false, 00:17:22.207 "enable_zerocopy_send_server": true, 00:17:22.207 "impl_name": "ssl", 00:17:22.207 "recv_buf_size": 4096, 00:17:22.207 "send_buf_size": 4096, 00:17:22.207 "tls_version": 0, 00:17:22.207 "zerocopy_threshold": 0 00:17:22.207 } 00:17:22.207 } 00:17:22.207 ] 00:17:22.207 }, 00:17:22.207 { 00:17:22.207 "subsystem": "vmd", 00:17:22.207 "config": [] 00:17:22.207 }, 00:17:22.207 { 00:17:22.207 "subsystem": "accel", 00:17:22.207 "config": [ 00:17:22.207 { 00:17:22.207 "method": "accel_set_options", 00:17:22.207 "params": { 00:17:22.207 "buf_count": 2048, 00:17:22.207 "large_cache_size": 16, 00:17:22.207 "sequence_count": 2048, 00:17:22.207 "small_cache_size": 128, 00:17:22.207 "task_count": 2048 00:17:22.207 } 00:17:22.207 } 00:17:22.207 ] 00:17:22.207 }, 00:17:22.207 { 00:17:22.207 "subsystem": "bdev", 00:17:22.207 "config": [ 00:17:22.207 { 00:17:22.207 "method": "bdev_set_options", 00:17:22.207 "params": { 00:17:22.207 "bdev_auto_examine": true, 00:17:22.207 "bdev_io_cache_size": 256, 00:17:22.207 "bdev_io_pool_size": 65535, 00:17:22.207 "iobuf_large_cache_size": 16, 00:17:22.207 "iobuf_small_cache_size": 128 00:17:22.207 } 00:17:22.207 }, 00:17:22.207 { 00:17:22.207 "method": "bdev_raid_set_options", 00:17:22.207 "params": { 00:17:22.207 "process_window_size_kb": 1024 00:17:22.207 } 00:17:22.207 }, 00:17:22.207 { 00:17:22.207 "method": "bdev_iscsi_set_options", 00:17:22.207 "params": { 00:17:22.207 "timeout_sec": 30 00:17:22.207 } 00:17:22.207 }, 00:17:22.208 { 00:17:22.208 "method": "bdev_nvme_set_options", 00:17:22.208 "params": { 00:17:22.208 "action_on_timeout": "none", 00:17:22.208 "allow_accel_sequence": false, 00:17:22.208 "arbitration_burst": 0, 00:17:22.208 "bdev_retry_count": 3, 00:17:22.208 "ctrlr_loss_timeout_sec": 0, 00:17:22.208 "delay_cmd_submit": true, 00:17:22.208 "fast_io_fail_timeout_sec": 0, 00:17:22.208 "generate_uuids": false, 00:17:22.208 "high_priority_weight": 0, 00:17:22.208 "io_path_stat": false, 00:17:22.208 "io_queue_requests": 512, 00:17:22.208 "keep_alive_timeout_ms": 10000, 00:17:22.208 "low_priority_weight": 0, 00:17:22.208 "medium_priority_weight": 0, 00:17:22.208 "nvme_adminq_poll_period_us": 10000, 00:17:22.208 "nvme_ioq_poll_period_us": 0, 00:17:22.208 "reconnect_delay_sec": 0, 00:17:22.208 "timeout_admin_us": 0, 00:17:22.208 "timeout_us": 0, 00:17:22.208 "transport_ack_timeout": 0, 00:17:22.208 "transport_retry_count": 4, 00:17:22.208 "transport_tos": 0 00:17:22.208 } 00:17:22.208 }, 00:17:22.208 { 00:17:22.208 "method": "bdev_nvme_attach_controller", 00:17:22.208 "params": { 00:17:22.208 "adrfam": "IPv4", 00:17:22.208 "ctrlr_loss_timeout_sec": 
0, 00:17:22.208 "ddgst": false, 00:17:22.208 "fast_io_fail_timeout_sec": 0, 00:17:22.208 "hdgst": false, 00:17:22.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.208 "name": "TLSTEST", 00:17:22.208 "prchk_guard": false, 00:17:22.208 "prchk_reftag": false, 00:17:22.208 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:22.208 "reconnect_delay_sec": 0, 00:17:22.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.208 "traddr": "10.0.0.2", 00:17:22.208 "trsvcid": "4420", 00:17:22.208 "trtype": "TCP" 00:17:22.208 } 00:17:22.208 }, 00:17:22.208 { 00:17:22.208 "method": "bdev_nvme_set_hotplug", 00:17:22.208 "params": { 00:17:22.208 "enable": false, 00:17:22.208 "period_us": 100000 00:17:22.208 } 00:17:22.208 }, 00:17:22.208 { 00:17:22.208 "method": "bdev_wait_for_examine" 00:17:22.208 } 00:17:22.208 ] 00:17:22.208 }, 00:17:22.208 { 00:17:22.208 "subsystem": "nbd", 00:17:22.208 "config": [] 00:17:22.208 } 00:17:22.208 ] 00:17:22.208 }' 00:17:22.208 22:37:22 -- target/tls.sh@208 -- # killprocess 89369 00:17:22.208 22:37:22 -- common/autotest_common.sh@936 -- # '[' -z 89369 ']' 00:17:22.208 22:37:22 -- common/autotest_common.sh@940 -- # kill -0 89369 00:17:22.208 22:37:22 -- common/autotest_common.sh@941 -- # uname 00:17:22.208 22:37:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.208 22:37:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89369 00:17:22.208 22:37:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:22.208 22:37:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:22.208 killing process with pid 89369 00:17:22.208 22:37:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89369' 00:17:22.208 22:37:22 -- common/autotest_common.sh@955 -- # kill 89369 00:17:22.208 22:37:22 -- common/autotest_common.sh@960 -- # wait 89369 00:17:22.208 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.208 00:17:22.208 Latency(us) 00:17:22.208 [2024-11-20T22:37:22.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.208 [2024-11-20T22:37:22.942Z] =================================================================================================================== 00:17:22.208 [2024-11-20T22:37:22.942Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.467 22:37:22 -- target/tls.sh@209 -- # killprocess 89272 00:17:22.467 22:37:22 -- common/autotest_common.sh@936 -- # '[' -z 89272 ']' 00:17:22.467 22:37:22 -- common/autotest_common.sh@940 -- # kill -0 89272 00:17:22.467 22:37:22 -- common/autotest_common.sh@941 -- # uname 00:17:22.467 22:37:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.467 22:37:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89272 00:17:22.467 22:37:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:22.467 22:37:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:22.467 killing process with pid 89272 00:17:22.467 22:37:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89272' 00:17:22.467 22:37:22 -- common/autotest_common.sh@955 -- # kill 89272 00:17:22.467 22:37:22 -- common/autotest_common.sh@960 -- # wait 89272 00:17:22.726 22:37:23 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:22.726 22:37:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.726 22:37:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.726 22:37:23 -- target/tls.sh@212 -- # 
echo '{ 00:17:22.726 "subsystems": [ 00:17:22.726 { 00:17:22.726 "subsystem": "iobuf", 00:17:22.726 "config": [ 00:17:22.726 { 00:17:22.726 "method": "iobuf_set_options", 00:17:22.726 "params": { 00:17:22.726 "large_bufsize": 135168, 00:17:22.726 "large_pool_count": 1024, 00:17:22.726 "small_bufsize": 8192, 00:17:22.726 "small_pool_count": 8192 00:17:22.726 } 00:17:22.726 } 00:17:22.726 ] 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "subsystem": "sock", 00:17:22.726 "config": [ 00:17:22.726 { 00:17:22.726 "method": "sock_impl_set_options", 00:17:22.726 "params": { 00:17:22.726 "enable_ktls": false, 00:17:22.726 "enable_placement_id": 0, 00:17:22.726 "enable_quickack": false, 00:17:22.726 "enable_recv_pipe": true, 00:17:22.726 "enable_zerocopy_send_client": false, 00:17:22.726 "enable_zerocopy_send_server": true, 00:17:22.726 "impl_name": "posix", 00:17:22.726 "recv_buf_size": 2097152, 00:17:22.726 "send_buf_size": 2097152, 00:17:22.726 "tls_version": 0, 00:17:22.726 "zerocopy_threshold": 0 00:17:22.726 } 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "method": "sock_impl_set_options", 00:17:22.726 "params": { 00:17:22.726 "enable_ktls": false, 00:17:22.726 "enable_placement_id": 0, 00:17:22.726 "enable_quickack": false, 00:17:22.726 "enable_recv_pipe": true, 00:17:22.726 "enable_zerocopy_send_client": false, 00:17:22.726 "enable_zerocopy_send_server": true, 00:17:22.726 "impl_name": "ssl", 00:17:22.726 "recv_buf_size": 4096, 00:17:22.726 "send_buf_size": 4096, 00:17:22.726 "tls_version": 0, 00:17:22.726 "zerocopy_threshold": 0 00:17:22.726 } 00:17:22.726 } 00:17:22.726 ] 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "subsystem": "vmd", 00:17:22.726 "config": [] 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "subsystem": "accel", 00:17:22.726 "config": [ 00:17:22.726 { 00:17:22.726 "method": "accel_set_options", 00:17:22.726 "params": { 00:17:22.726 "buf_count": 2048, 00:17:22.726 "large_cache_size": 16, 00:17:22.726 "sequence_count": 2048, 00:17:22.726 "small_cache_size": 128, 00:17:22.726 "task_count": 2048 00:17:22.726 } 00:17:22.726 } 00:17:22.726 ] 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "subsystem": "bdev", 00:17:22.726 "config": [ 00:17:22.726 { 00:17:22.726 "method": "bdev_set_options", 00:17:22.726 "params": { 00:17:22.726 "bdev_auto_examine": true, 00:17:22.726 "bdev_io_cache_size": 256, 00:17:22.726 "bdev_io_pool_size": 65535, 00:17:22.726 "iobuf_large_cache_size": 16, 00:17:22.726 "iobuf_small_cache_size": 128 00:17:22.726 } 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "method": "bdev_raid_set_options", 00:17:22.726 "params": { 00:17:22.726 "process_window_size_kb": 1024 00:17:22.726 } 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "method": "bdev_iscsi_set_options", 00:17:22.726 "params": { 00:17:22.726 "timeout_sec": 30 00:17:22.726 } 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "method": "bdev_nvme_set_options", 00:17:22.726 "params": { 00:17:22.726 "action_on_timeout": "none", 00:17:22.726 "allow_accel_sequence": false, 00:17:22.726 "arbitration_burst": 0, 00:17:22.726 "bdev_retry_count": 3, 00:17:22.726 "ctrlr_loss_timeout_sec": 0, 00:17:22.726 "delay_cmd_submit": true, 00:17:22.726 "fast_io_fail_timeout_sec": 0, 00:17:22.726 "generate_uuids": false, 00:17:22.726 "high_priority_weight": 0, 00:17:22.726 "io_path_stat": false, 00:17:22.726 "io_queue_requests": 0, 00:17:22.726 "keep_alive_timeout_ms": 10000, 00:17:22.726 "low_priority_weight": 0, 00:17:22.726 "medium_priority_weight": 0, 00:17:22.726 "nvme_adminq_poll_period_us": 10000, 00:17:22.726 "nvme_ioq_poll_period_us": 0, 
00:17:22.726 "reconnect_delay_sec": 0, 00:17:22.726 "timeout_admin_us": 0, 00:17:22.726 "timeout_us": 0, 00:17:22.726 "transport_ack_timeout": 0, 00:17:22.726 "transport_retry_count": 4, 00:17:22.726 "transport_tos": 0 00:17:22.726 } 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "method": "bdev_nvme_set_hotplug", 00:17:22.726 "params": { 00:17:22.726 "enable": false, 00:17:22.726 "period_us": 100000 00:17:22.726 } 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "method": "bdev_malloc_create", 00:17:22.726 "params": { 00:17:22.726 "block_size": 4096, 00:17:22.726 "name": "malloc0", 00:17:22.726 "num_blocks": 8192, 00:17:22.726 "optimal_io_boundary": 0, 00:17:22.726 "physical_block_size": 4096, 00:17:22.726 "uuid": "dc7e0abc-ff82-49e3-a8bf-0f42d04c7da8" 00:17:22.726 } 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "method": "bdev_wait_for_examine" 00:17:22.726 } 00:17:22.726 ] 00:17:22.726 }, 00:17:22.726 { 00:17:22.727 "subsystem": "nbd", 00:17:22.727 "config": [] 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "subsystem": "scheduler", 00:17:22.727 "config": [ 00:17:22.727 { 00:17:22.727 "method": "framework_set_scheduler", 00:17:22.727 "params": { 00:17:22.727 "name": "static" 00:17:22.727 } 00:17:22.727 } 00:17:22.727 ] 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "subsystem": "nvmf", 00:17:22.727 "config": [ 00:17:22.727 { 00:17:22.727 "method": "nvmf_set_config", 00:17:22.727 "params": { 00:17:22.727 "admin_cmd_passthru": { 00:17:22.727 "identify_ctrlr": false 00:17:22.727 }, 00:17:22.727 "discovery_filter": "match_any" 00:17:22.727 } 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "method": "nvmf_set_max_subsystems", 00:17:22.727 "params": { 00:17:22.727 "max_subsystems": 1024 00:17:22.727 } 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "method": "nvmf_set_crdt", 00:17:22.727 "params": { 00:17:22.727 "crdt1": 0, 00:17:22.727 "crdt2": 0, 00:17:22.727 "crdt3": 0 00:17:22.727 } 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "method": "nvmf_create_transport", 00:17:22.727 "params": { 00:17:22.727 "abort_timeout_sec": 1, 00:17:22.727 "buf_cache_size": 4294967295, 00:17:22.727 "c2h_success": false, 00:17:22.727 "dif_insert_or_strip": false, 00:17:22.727 "in_capsule_data_size": 4096, 00:17:22.727 "io_unit_size": 131072, 00:17:22.727 "max_aq_depth": 128, 00:17:22.727 "max_io_qpairs_per_ctrlr": 127, 00:17:22.727 "max_io_size": 131072, 00:17:22.727 "max_queue_depth": 128, 00:17:22.727 "num_shared_buffers": 511, 00:17:22.727 "sock_priority": 0, 00:17:22.727 "trtype": "TCP", 00:17:22.727 "zcopy": false 00:17:22.727 } 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "method": "nvmf_create_subsystem", 00:17:22.727 "params": { 00:17:22.727 "allow_any_host": false, 00:17:22.727 "ana_reporting": false, 00:17:22.727 "max_cntlid": 65519, 00:17:22.727 "max_namespaces": 10, 00:17:22.727 "min_cntlid": 1, 00:17:22.727 "model_number": "SPDK bdev Controller", 00:17:22.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.727 "serial_number": "SPDK00000000000001" 00:17:22.727 } 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "method": "nvmf_subsystem_add_host", 00:17:22.727 "params": { 00:17:22.727 "host": "nqn.2016-06.io.spdk:host1", 00:17:22.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.727 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:22.727 } 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "method": "nvmf_subsystem_add_ns", 00:17:22.727 "params": { 00:17:22.727 "namespace": { 00:17:22.727 "bdev_name": "malloc0", 00:17:22.727 "nguid": "DC7E0ABCFF8249E3A8BF0F42D04C7DA8", 00:17:22.727 "nsid": 1, 00:17:22.727 "uuid": 
"dc7e0abc-ff82-49e3-a8bf-0f42d04c7da8" 00:17:22.727 }, 00:17:22.727 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:22.727 } 00:17:22.727 }, 00:17:22.727 { 00:17:22.727 "method": "nvmf_subsystem_add_listener", 00:17:22.727 "params": { 00:17:22.727 "listen_address": { 00:17:22.727 "adrfam": "IPv4", 00:17:22.727 "traddr": "10.0.0.2", 00:17:22.727 "trsvcid": "4420", 00:17:22.727 "trtype": "TCP" 00:17:22.727 }, 00:17:22.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.727 "secure_channel": true 00:17:22.727 } 00:17:22.727 } 00:17:22.727 ] 00:17:22.727 } 00:17:22.727 ] 00:17:22.727 }' 00:17:22.727 22:37:23 -- common/autotest_common.sh@10 -- # set +x 00:17:22.727 22:37:23 -- nvmf/common.sh@469 -- # nvmfpid=89443 00:17:22.727 22:37:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:22.727 22:37:23 -- nvmf/common.sh@470 -- # waitforlisten 89443 00:17:22.727 22:37:23 -- common/autotest_common.sh@829 -- # '[' -z 89443 ']' 00:17:22.727 22:37:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.727 22:37:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.727 22:37:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.727 22:37:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.727 22:37:23 -- common/autotest_common.sh@10 -- # set +x 00:17:22.727 [2024-11-20 22:37:23.298400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:22.727 [2024-11-20 22:37:23.298499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.727 [2024-11-20 22:37:23.438018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.986 [2024-11-20 22:37:23.495237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.986 [2024-11-20 22:37:23.495411] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.986 [2024-11-20 22:37:23.495424] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.986 [2024-11-20 22:37:23.495432] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:22.986 [2024-11-20 22:37:23.495457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.245 [2024-11-20 22:37:23.738800] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.245 [2024-11-20 22:37:23.770766] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.245 [2024-11-20 22:37:23.771010] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.813 22:37:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.813 22:37:24 -- common/autotest_common.sh@862 -- # return 0 00:17:23.813 22:37:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.813 22:37:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.813 22:37:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.813 22:37:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.813 22:37:24 -- target/tls.sh@216 -- # bdevperf_pid=89487 00:17:23.813 22:37:24 -- target/tls.sh@217 -- # waitforlisten 89487 /var/tmp/bdevperf.sock 00:17:23.813 22:37:24 -- common/autotest_common.sh@829 -- # '[' -z 89487 ']' 00:17:23.813 22:37:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.813 22:37:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.813 22:37:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.813 22:37:24 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:23.813 22:37:24 -- target/tls.sh@213 -- # echo '{ 00:17:23.814 "subsystems": [ 00:17:23.814 { 00:17:23.814 "subsystem": "iobuf", 00:17:23.814 "config": [ 00:17:23.814 { 00:17:23.814 "method": "iobuf_set_options", 00:17:23.814 "params": { 00:17:23.814 "large_bufsize": 135168, 00:17:23.814 "large_pool_count": 1024, 00:17:23.814 "small_bufsize": 8192, 00:17:23.814 "small_pool_count": 8192 00:17:23.814 } 00:17:23.814 } 00:17:23.814 ] 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "subsystem": "sock", 00:17:23.814 "config": [ 00:17:23.814 { 00:17:23.814 "method": "sock_impl_set_options", 00:17:23.814 "params": { 00:17:23.814 "enable_ktls": false, 00:17:23.814 "enable_placement_id": 0, 00:17:23.814 "enable_quickack": false, 00:17:23.814 "enable_recv_pipe": true, 00:17:23.814 "enable_zerocopy_send_client": false, 00:17:23.814 "enable_zerocopy_send_server": true, 00:17:23.814 "impl_name": "posix", 00:17:23.814 "recv_buf_size": 2097152, 00:17:23.814 "send_buf_size": 2097152, 00:17:23.814 "tls_version": 0, 00:17:23.814 "zerocopy_threshold": 0 00:17:23.814 } 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "method": "sock_impl_set_options", 00:17:23.814 "params": { 00:17:23.814 "enable_ktls": false, 00:17:23.814 "enable_placement_id": 0, 00:17:23.814 "enable_quickack": false, 00:17:23.814 "enable_recv_pipe": true, 00:17:23.814 "enable_zerocopy_send_client": false, 00:17:23.814 "enable_zerocopy_send_server": true, 00:17:23.814 "impl_name": "ssl", 00:17:23.814 "recv_buf_size": 4096, 00:17:23.814 "send_buf_size": 4096, 00:17:23.814 "tls_version": 0, 00:17:23.814 "zerocopy_threshold": 0 00:17:23.814 } 00:17:23.814 } 00:17:23.814 ] 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "subsystem": "vmd", 00:17:23.814 "config": [] 
00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "subsystem": "accel", 00:17:23.814 "config": [ 00:17:23.814 { 00:17:23.814 "method": "accel_set_options", 00:17:23.814 "params": { 00:17:23.814 "buf_count": 2048, 00:17:23.814 "large_cache_size": 16, 00:17:23.814 "sequence_count": 2048, 00:17:23.814 "small_cache_size": 128, 00:17:23.814 "task_count": 2048 00:17:23.814 } 00:17:23.814 } 00:17:23.814 ] 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "subsystem": "bdev", 00:17:23.814 "config": [ 00:17:23.814 { 00:17:23.814 "method": "bdev_set_options", 00:17:23.814 "params": { 00:17:23.814 "bdev_auto_examine": true, 00:17:23.814 "bdev_io_cache_size": 256, 00:17:23.814 "bdev_io_pool_size": 65535, 00:17:23.814 "iobuf_large_cache_size": 16, 00:17:23.814 "iobuf_small_cache_size": 128 00:17:23.814 } 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "method": "bdev_raid_set_options", 00:17:23.814 "params": { 00:17:23.814 "process_window_size_kb": 1024 00:17:23.814 } 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "method": "bdev_iscsi_set_options", 00:17:23.814 "params": { 00:17:23.814 "timeout_sec": 30 00:17:23.814 } 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "method": "bdev_nvme_set_options", 00:17:23.814 "params": { 00:17:23.814 "action_on_timeout": "none", 00:17:23.814 "allow_accel_sequence": false, 00:17:23.814 "arbitration_burst": 0, 00:17:23.814 "bdev_retry_count": 3, 00:17:23.814 "ctrlr_loss_timeout_sec": 0, 00:17:23.814 "delay_cmd_submit": true, 00:17:23.814 "fast_io_fail_timeout_sec": 0, 00:17:23.814 "generate_uuids": false, 00:17:23.814 "high_priority_weight": 0, 00:17:23.814 "io_path_stat": false, 00:17:23.814 "io_queue_requests": 512, 00:17:23.814 "keep_alive_timeout_ms": 10000, 00:17:23.814 "low_priority_weight": 0, 00:17:23.814 "medium_priority_weight": 0, 00:17:23.814 "nvme_adminq_poll_period_us": 10000, 00:17:23.814 "nvme_ioq_poll_period_us": 0, 00:17:23.814 "reconnect_delay_sec": 0, 00:17:23.814 "timeout_admin_us": 0, 00:17:23.814 "timeout_us": 0, 00:17:23.814 "transport_ack_timeout": 0, 00:17:23.814 "transport_retry_count": 4, 00:17:23.814 "transport_tos": 0 00:17:23.814 } 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "method": "bdev_nvme_attach_controller", 00:17:23.814 "params": { 00:17:23.814 "adrfam": "IPv4", 00:17:23.814 "ctrlr_loss_timeout_sec": 0, 00:17:23.814 "ddgst": false, 00:17:23.814 "fast_io_fail_timeout_sec": 0, 00:17:23.814 "hdgst": false, 00:17:23.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:23.814 "name": "TLSTEST", 00:17:23.814 "prchk_guard": false, 00:17:23.814 "prchk_reftag": false, 00:17:23.814 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:23.814 "reconnect_delay_sec": 0, 00:17:23.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.814 "traddr": "10.0.0.2", 00:17:23.814 "trsvcid": "4420", 00:17:23.814 "trtype": "TCP" 00:17:23.814 } 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "method": "bdev_nvme_set_hotplug", 00:17:23.814 "params": { 00:17:23.814 "enable": false, 00:17:23.814 "period_us": 100000 00:17:23.814 } 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "method": "bdev_wait_for_examine" 00:17:23.814 } 00:17:23.814 ] 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "subsystem": "nbd", 00:17:23.814 "config": [] 00:17:23.814 } 00:17:23.814 ] 00:17:23.814 }' 00:17:23.814 22:37:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.814 22:37:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.814 [2024-11-20 22:37:24.343164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:23.814 [2024-11-20 22:37:24.343246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89487 ] 00:17:23.814 [2024-11-20 22:37:24.475754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.814 [2024-11-20 22:37:24.539020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.073 [2024-11-20 22:37:24.686894] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.640 22:37:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.640 22:37:25 -- common/autotest_common.sh@862 -- # return 0 00:17:24.640 22:37:25 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:24.640 Running I/O for 10 seconds... 00:17:36.847 00:17:36.847 Latency(us) 00:17:36.847 [2024-11-20T22:37:37.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.847 [2024-11-20T22:37:37.581Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.847 Verification LBA range: start 0x0 length 0x2000 00:17:36.847 TLSTESTn1 : 10.01 6494.71 25.37 0.00 0.00 19678.51 4468.36 263097.25 00:17:36.847 [2024-11-20T22:37:37.581Z] =================================================================================================================== 00:17:36.847 [2024-11-20T22:37:37.581Z] Total : 6494.71 25.37 0.00 0.00 19678.51 4468.36 263097.25 00:17:36.847 0 00:17:36.847 22:37:35 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.847 22:37:35 -- target/tls.sh@223 -- # killprocess 89487 00:17:36.847 22:37:35 -- common/autotest_common.sh@936 -- # '[' -z 89487 ']' 00:17:36.847 22:37:35 -- common/autotest_common.sh@940 -- # kill -0 89487 00:17:36.847 22:37:35 -- common/autotest_common.sh@941 -- # uname 00:17:36.847 22:37:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.847 22:37:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89487 00:17:36.847 22:37:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:36.847 22:37:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:36.847 killing process with pid 89487 00:17:36.847 22:37:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89487' 00:17:36.847 22:37:35 -- common/autotest_common.sh@955 -- # kill 89487 00:17:36.847 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.847 00:17:36.847 Latency(us) 00:17:36.847 [2024-11-20T22:37:37.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.847 [2024-11-20T22:37:37.581Z] =================================================================================================================== 00:17:36.847 [2024-11-20T22:37:37.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.847 22:37:35 -- common/autotest_common.sh@960 -- # wait 89487 00:17:36.847 22:37:35 -- target/tls.sh@224 -- # killprocess 89443 00:17:36.847 22:37:35 -- common/autotest_common.sh@936 -- # '[' -z 89443 ']' 00:17:36.847 22:37:35 -- common/autotest_common.sh@940 -- # kill -0 89443 00:17:36.847 22:37:35 -- common/autotest_common.sh@941 -- # uname 00:17:36.847 22:37:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.847 22:37:35 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 89443 00:17:36.847 22:37:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:36.847 22:37:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:36.847 killing process with pid 89443 00:17:36.847 22:37:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89443' 00:17:36.847 22:37:35 -- common/autotest_common.sh@955 -- # kill 89443 00:17:36.847 22:37:35 -- common/autotest_common.sh@960 -- # wait 89443 00:17:36.847 22:37:35 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:36.847 22:37:35 -- target/tls.sh@227 -- # cleanup 00:17:36.847 22:37:35 -- target/tls.sh@15 -- # process_shm --id 0 00:17:36.847 22:37:35 -- common/autotest_common.sh@806 -- # type=--id 00:17:36.847 22:37:35 -- common/autotest_common.sh@807 -- # id=0 00:17:36.847 22:37:35 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:36.847 22:37:35 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:36.847 22:37:35 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:36.847 22:37:35 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:36.847 22:37:35 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:36.847 22:37:35 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:36.847 nvmf_trace.0 00:17:36.847 22:37:35 -- common/autotest_common.sh@821 -- # return 0 00:17:36.847 22:37:35 -- target/tls.sh@16 -- # killprocess 89487 00:17:36.847 22:37:35 -- common/autotest_common.sh@936 -- # '[' -z 89487 ']' 00:17:36.847 22:37:35 -- common/autotest_common.sh@940 -- # kill -0 89487 00:17:36.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89487) - No such process 00:17:36.847 Process with pid 89487 is not found 00:17:36.847 22:37:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89487 is not found' 00:17:36.847 22:37:35 -- target/tls.sh@17 -- # nvmftestfini 00:17:36.847 22:37:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:36.847 22:37:35 -- nvmf/common.sh@116 -- # sync 00:17:36.847 22:37:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:36.847 22:37:36 -- nvmf/common.sh@119 -- # set +e 00:17:36.847 22:37:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:36.847 22:37:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:36.847 rmmod nvme_tcp 00:17:36.847 rmmod nvme_fabrics 00:17:36.847 rmmod nvme_keyring 00:17:36.847 22:37:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:36.847 22:37:36 -- nvmf/common.sh@123 -- # set -e 00:17:36.847 22:37:36 -- nvmf/common.sh@124 -- # return 0 00:17:36.847 22:37:36 -- nvmf/common.sh@477 -- # '[' -n 89443 ']' 00:17:36.847 22:37:36 -- nvmf/common.sh@478 -- # killprocess 89443 00:17:36.847 22:37:36 -- common/autotest_common.sh@936 -- # '[' -z 89443 ']' 00:17:36.847 22:37:36 -- common/autotest_common.sh@940 -- # kill -0 89443 00:17:36.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89443) - No such process 00:17:36.847 Process with pid 89443 is not found 00:17:36.847 22:37:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89443 is not found' 00:17:36.847 22:37:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:36.847 22:37:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:36.847 22:37:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:36.847 22:37:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:17:36.847 22:37:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:36.847 22:37:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.847 22:37:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.847 22:37:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.847 22:37:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:36.847 22:37:36 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.847 ************************************ 00:17:36.847 END TEST nvmf_tls 00:17:36.847 ************************************ 00:17:36.847 00:17:36.847 real 1m10.134s 00:17:36.847 user 1m44.736s 00:17:36.847 sys 0m26.547s 00:17:36.847 22:37:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:36.847 22:37:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.847 22:37:36 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:36.847 22:37:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:36.847 22:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.847 22:37:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.847 ************************************ 00:17:36.847 START TEST nvmf_fips 00:17:36.847 ************************************ 00:17:36.847 22:37:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:36.847 * Looking for test storage... 00:17:36.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:36.847 22:37:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:36.847 22:37:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:36.847 22:37:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:36.847 22:37:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:36.847 22:37:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:36.847 22:37:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:36.847 22:37:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:36.847 22:37:36 -- scripts/common.sh@335 -- # IFS=.-: 00:17:36.847 22:37:36 -- scripts/common.sh@335 -- # read -ra ver1 00:17:36.847 22:37:36 -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.847 22:37:36 -- scripts/common.sh@336 -- # read -ra ver2 00:17:36.847 22:37:36 -- scripts/common.sh@337 -- # local 'op=<' 00:17:36.847 22:37:36 -- scripts/common.sh@339 -- # ver1_l=2 00:17:36.847 22:37:36 -- scripts/common.sh@340 -- # ver2_l=1 00:17:36.847 22:37:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:36.847 22:37:36 -- scripts/common.sh@343 -- # case "$op" in 00:17:36.847 22:37:36 -- scripts/common.sh@344 -- # : 1 00:17:36.847 22:37:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:36.848 22:37:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.848 22:37:36 -- scripts/common.sh@364 -- # decimal 1 00:17:36.848 22:37:36 -- scripts/common.sh@352 -- # local d=1 00:17:36.848 22:37:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.848 22:37:36 -- scripts/common.sh@354 -- # echo 1 00:17:36.848 22:37:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:36.848 22:37:36 -- scripts/common.sh@365 -- # decimal 2 00:17:36.848 22:37:36 -- scripts/common.sh@352 -- # local d=2 00:17:36.848 22:37:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.848 22:37:36 -- scripts/common.sh@354 -- # echo 2 00:17:36.848 22:37:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:36.848 22:37:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:36.848 22:37:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:36.848 22:37:36 -- scripts/common.sh@367 -- # return 0 00:17:36.848 22:37:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.848 22:37:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:36.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.848 --rc genhtml_branch_coverage=1 00:17:36.848 --rc genhtml_function_coverage=1 00:17:36.848 --rc genhtml_legend=1 00:17:36.848 --rc geninfo_all_blocks=1 00:17:36.848 --rc geninfo_unexecuted_blocks=1 00:17:36.848 00:17:36.848 ' 00:17:36.848 22:37:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:36.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.848 --rc genhtml_branch_coverage=1 00:17:36.848 --rc genhtml_function_coverage=1 00:17:36.848 --rc genhtml_legend=1 00:17:36.848 --rc geninfo_all_blocks=1 00:17:36.848 --rc geninfo_unexecuted_blocks=1 00:17:36.848 00:17:36.848 ' 00:17:36.848 22:37:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:36.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.848 --rc genhtml_branch_coverage=1 00:17:36.848 --rc genhtml_function_coverage=1 00:17:36.848 --rc genhtml_legend=1 00:17:36.848 --rc geninfo_all_blocks=1 00:17:36.848 --rc geninfo_unexecuted_blocks=1 00:17:36.848 00:17:36.848 ' 00:17:36.848 22:37:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:36.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.848 --rc genhtml_branch_coverage=1 00:17:36.848 --rc genhtml_function_coverage=1 00:17:36.848 --rc genhtml_legend=1 00:17:36.848 --rc geninfo_all_blocks=1 00:17:36.848 --rc geninfo_unexecuted_blocks=1 00:17:36.848 00:17:36.848 ' 00:17:36.848 22:37:36 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.848 22:37:36 -- nvmf/common.sh@7 -- # uname -s 00:17:36.848 22:37:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.848 22:37:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.848 22:37:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.848 22:37:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.848 22:37:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.848 22:37:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.848 22:37:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.848 22:37:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.848 22:37:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.848 22:37:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.848 22:37:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:17:36.848 
22:37:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:17:36.848 22:37:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.848 22:37:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.848 22:37:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.848 22:37:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.848 22:37:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.848 22:37:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.848 22:37:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.848 22:37:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.848 22:37:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.848 22:37:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.848 22:37:36 -- paths/export.sh@5 -- # export PATH 00:17:36.848 22:37:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.848 22:37:36 -- nvmf/common.sh@46 -- # : 0 00:17:36.848 22:37:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:36.848 22:37:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:36.848 22:37:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:36.848 22:37:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.848 22:37:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.848 22:37:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:36.848 22:37:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:36.848 22:37:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:36.848 22:37:36 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.848 22:37:36 -- fips/fips.sh@89 -- # check_openssl_version 00:17:36.848 22:37:36 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:36.848 22:37:36 -- fips/fips.sh@85 -- # openssl version 00:17:36.848 22:37:36 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:36.848 22:37:36 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:36.848 22:37:36 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:36.848 22:37:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:36.848 22:37:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:36.848 22:37:36 -- scripts/common.sh@335 -- # IFS=.-: 00:17:36.848 22:37:36 -- scripts/common.sh@335 -- # read -ra ver1 00:17:36.848 22:37:36 -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.848 22:37:36 -- scripts/common.sh@336 -- # read -ra ver2 00:17:36.848 22:37:36 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:36.848 22:37:36 -- scripts/common.sh@339 -- # ver1_l=3 00:17:36.848 22:37:36 -- scripts/common.sh@340 -- # ver2_l=3 00:17:36.848 22:37:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:36.848 22:37:36 -- scripts/common.sh@343 -- # case "$op" in 00:17:36.848 22:37:36 -- scripts/common.sh@347 -- # : 1 00:17:36.848 22:37:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:36.848 22:37:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:36.848 22:37:36 -- scripts/common.sh@364 -- # decimal 3 00:17:36.848 22:37:36 -- scripts/common.sh@352 -- # local d=3 00:17:36.848 22:37:36 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:36.848 22:37:36 -- scripts/common.sh@354 -- # echo 3 00:17:36.848 22:37:36 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:36.848 22:37:36 -- scripts/common.sh@365 -- # decimal 3 00:17:36.848 22:37:36 -- scripts/common.sh@352 -- # local d=3 00:17:36.848 22:37:36 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:36.848 22:37:36 -- scripts/common.sh@354 -- # echo 3 00:17:36.848 22:37:36 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:36.848 22:37:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:36.848 22:37:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:36.848 22:37:36 -- scripts/common.sh@363 -- # (( v++ )) 00:17:36.848 22:37:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:36.848 22:37:36 -- scripts/common.sh@364 -- # decimal 1 00:17:36.848 22:37:36 -- scripts/common.sh@352 -- # local d=1 00:17:36.848 22:37:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.848 22:37:36 -- scripts/common.sh@354 -- # echo 1 00:17:36.848 22:37:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:36.848 22:37:36 -- scripts/common.sh@365 -- # decimal 0 00:17:36.848 22:37:36 -- scripts/common.sh@352 -- # local d=0 00:17:36.848 22:37:36 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:36.848 22:37:36 -- scripts/common.sh@354 -- # echo 0 00:17:36.848 22:37:36 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:36.848 22:37:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:36.848 22:37:36 -- scripts/common.sh@366 -- # return 0 00:17:36.848 22:37:36 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:36.848 22:37:36 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:36.848 22:37:36 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:36.848 22:37:36 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:36.848 22:37:36 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:36.848 22:37:36 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:36.848 22:37:36 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:36.848 22:37:36 -- fips/fips.sh@113 -- # build_openssl_config 00:17:36.848 22:37:36 -- fips/fips.sh@37 -- # cat 00:17:36.848 22:37:36 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:36.848 22:37:36 -- fips/fips.sh@58 -- # cat - 00:17:36.848 22:37:36 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:36.848 22:37:36 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:36.848 22:37:36 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:36.848 22:37:36 -- fips/fips.sh@116 -- # openssl list -providers 00:17:36.848 22:37:36 -- fips/fips.sh@116 -- # grep name 00:17:36.848 22:37:36 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:36.848 22:37:36 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:36.849 22:37:36 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:36.849 22:37:36 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:36.849 22:37:36 -- fips/fips.sh@127 -- # : 00:17:36.849 22:37:36 -- common/autotest_common.sh@650 -- # local es=0 00:17:36.849 22:37:36 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:36.849 22:37:36 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:36.849 22:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.849 22:37:36 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:36.849 22:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.849 22:37:36 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:36.849 22:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.849 22:37:36 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:36.849 22:37:36 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:36.849 22:37:36 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:36.849 Error setting digest 00:17:36.849 4032A0736F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:36.849 4032A0736F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:36.849 22:37:36 -- common/autotest_common.sh@653 -- # es=1 00:17:36.849 22:37:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:36.849 22:37:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:36.849 22:37:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:36.849 22:37:36 -- fips/fips.sh@130 -- # nvmftestinit 00:17:36.849 22:37:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:36.849 22:37:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.849 22:37:36 -- nvmf/common.sh@436 -- # prepare_net_devs 
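The "Error setting digest" failure above is the expected outcome: fips.sh only proceeds when a FIPS provider is loaded and MD5, a non-approved digest, is rejected (the NOT wrapper treats a successful md5 call as a failure of the check). The same probe can be repeated by hand with the standard OpenSSL 3.x CLI used in the trace:

  openssl version                      # the script requires 3.0.0 or newer
  openssl list -providers | grep name  # expect a fips provider alongside the base provider
  openssl md5 /dev/null                # must fail in FIPS mode with an "unsupported" digest error

If the md5 call succeeds, the host is not enforcing FIPS and the test would abort instead of continuing with nvmftestinit.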
00:17:36.849 22:37:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:36.849 22:37:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:36.849 22:37:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.849 22:37:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.849 22:37:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.849 22:37:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:36.849 22:37:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:36.849 22:37:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:36.849 22:37:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:36.849 22:37:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:36.849 22:37:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:36.849 22:37:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.849 22:37:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.849 22:37:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:36.849 22:37:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:36.849 22:37:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.849 22:37:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.849 22:37:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.849 22:37:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.849 22:37:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.849 22:37:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.849 22:37:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.849 22:37:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.849 22:37:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:36.849 22:37:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:36.849 Cannot find device "nvmf_tgt_br" 00:17:36.849 22:37:36 -- nvmf/common.sh@154 -- # true 00:17:36.849 22:37:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.849 Cannot find device "nvmf_tgt_br2" 00:17:36.849 22:37:36 -- nvmf/common.sh@155 -- # true 00:17:36.849 22:37:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:36.849 22:37:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:36.849 Cannot find device "nvmf_tgt_br" 00:17:36.849 22:37:36 -- nvmf/common.sh@157 -- # true 00:17:36.849 22:37:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:36.849 Cannot find device "nvmf_tgt_br2" 00:17:36.849 22:37:36 -- nvmf/common.sh@158 -- # true 00:17:36.849 22:37:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:36.849 22:37:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:36.849 22:37:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.849 22:37:36 -- nvmf/common.sh@161 -- # true 00:17:36.849 22:37:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.849 22:37:36 -- nvmf/common.sh@162 -- # true 00:17:36.849 22:37:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.849 22:37:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.849 22:37:36 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.849 22:37:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.849 22:37:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.849 22:37:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.849 22:37:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.849 22:37:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:36.849 22:37:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:36.849 22:37:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:36.849 22:37:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:36.849 22:37:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:36.849 22:37:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:36.849 22:37:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.849 22:37:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.849 22:37:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.849 22:37:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:36.849 22:37:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:36.849 22:37:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.849 22:37:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.849 22:37:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.849 22:37:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.849 22:37:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.849 22:37:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:36.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:17:36.849 00:17:36.849 --- 10.0.0.2 ping statistics --- 00:17:36.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.849 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:36.849 22:37:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:36.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:36.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:36.849 00:17:36.849 --- 10.0.0.3 ping statistics --- 00:17:36.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.849 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:36.849 22:37:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:36.849 00:17:36.849 --- 10.0.0.1 ping statistics --- 00:17:36.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.849 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:36.849 22:37:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.849 22:37:36 -- nvmf/common.sh@421 -- # return 0 00:17:36.849 22:37:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:36.849 22:37:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.849 22:37:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:36.849 22:37:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:36.849 22:37:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.849 22:37:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:36.849 22:37:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:36.849 22:37:36 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:36.849 22:37:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:36.849 22:37:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.849 22:37:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.849 22:37:36 -- nvmf/common.sh@469 -- # nvmfpid=89856 00:17:36.849 22:37:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.849 22:37:36 -- nvmf/common.sh@470 -- # waitforlisten 89856 00:17:36.849 22:37:36 -- common/autotest_common.sh@829 -- # '[' -z 89856 ']' 00:17:36.849 22:37:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.849 22:37:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.849 22:37:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.849 22:37:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.849 22:37:36 -- common/autotest_common.sh@10 -- # set +x 00:17:36.849 [2024-11-20 22:37:37.036324] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:36.849 [2024-11-20 22:37:37.036410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.849 [2024-11-20 22:37:37.174881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.849 [2024-11-20 22:37:37.259797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:36.849 [2024-11-20 22:37:37.259922] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.849 [2024-11-20 22:37:37.259934] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.849 [2024-11-20 22:37:37.259941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
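Editorial note: the nvmf_veth_init sequence traced above reduces to a small amount of iproute2 plumbing. A minimal standalone sketch distilled from those exact commands (namespace, interface and address names are the script's own conventions; the iptables rule only opens the default NVMe/TCP port 4420):

  # create the target network namespace and the veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address the initiator (host) side and the target (namespace) side
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # allow NVMe/TCP traffic in and verify reachability both ways
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Once the pings succeed, the test prepends "ip netns exec nvmf_tgt_ns_spdk" to NVMF_APP so nvmf_tgt itself runs inside the namespace, as the trace above shows.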
00:17:36.849 [2024-11-20 22:37:37.259971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.417 22:37:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.417 22:37:38 -- common/autotest_common.sh@862 -- # return 0 00:17:37.417 22:37:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:37.417 22:37:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.417 22:37:38 -- common/autotest_common.sh@10 -- # set +x 00:17:37.417 22:37:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.417 22:37:38 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:37.417 22:37:38 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:37.417 22:37:38 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:37.417 22:37:38 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:37.417 22:37:38 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:37.417 22:37:38 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:37.417 22:37:38 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:37.417 22:37:38 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.675 [2024-11-20 22:37:38.342340] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.675 [2024-11-20 22:37:38.358291] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.675 [2024-11-20 22:37:38.358491] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.675 malloc0 00:17:37.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.934 22:37:38 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.934 22:37:38 -- fips/fips.sh@147 -- # bdevperf_pid=89908 00:17:37.934 22:37:38 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.934 22:37:38 -- fips/fips.sh@148 -- # waitforlisten 89908 /var/tmp/bdevperf.sock 00:17:37.934 22:37:38 -- common/autotest_common.sh@829 -- # '[' -z 89908 ']' 00:17:37.934 22:37:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.934 22:37:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.934 22:37:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.934 22:37:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.934 22:37:38 -- common/autotest_common.sh@10 -- # set +x 00:17:37.934 [2024-11-20 22:37:38.482517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
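Editorial note: before bdevperf connects with TLS, fips.sh stages the pre-shared key on disk, as traced above. A hedged sketch of just that provisioning step, reusing the interchange-format key string and path from the trace (the attach command that consumes it appears further down):

  # NVMe/TCP TLS PSK in interchange format, written without a trailing newline
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"    # bdev_nvme_attach_controller later points --psk at this file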
00:17:37.934 [2024-11-20 22:37:38.482596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89908 ] 00:17:37.934 [2024-11-20 22:37:38.617934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.193 [2024-11-20 22:37:38.684582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.760 22:37:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.760 22:37:39 -- common/autotest_common.sh@862 -- # return 0 00:17:38.760 22:37:39 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:39.018 [2024-11-20 22:37:39.604608] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.018 TLSTESTn1 00:17:39.018 22:37:39 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:39.277 Running I/O for 10 seconds... 00:17:49.253 00:17:49.253 Latency(us) 00:17:49.253 [2024-11-20T22:37:49.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.253 [2024-11-20T22:37:49.987Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:49.253 Verification LBA range: start 0x0 length 0x2000 00:17:49.253 TLSTESTn1 : 10.01 6534.29 25.52 0.00 0.00 19557.88 5808.87 22163.08 00:17:49.253 [2024-11-20T22:37:49.987Z] =================================================================================================================== 00:17:49.253 [2024-11-20T22:37:49.987Z] Total : 6534.29 25.52 0.00 0.00 19557.88 5808.87 22163.08 00:17:49.253 0 00:17:49.253 22:37:49 -- fips/fips.sh@1 -- # cleanup 00:17:49.253 22:37:49 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:49.253 22:37:49 -- common/autotest_common.sh@806 -- # type=--id 00:17:49.253 22:37:49 -- common/autotest_common.sh@807 -- # id=0 00:17:49.253 22:37:49 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:49.253 22:37:49 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:49.253 22:37:49 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:49.253 22:37:49 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:49.253 22:37:49 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:49.253 22:37:49 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:49.253 nvmf_trace.0 00:17:49.253 22:37:49 -- common/autotest_common.sh@821 -- # return 0 00:17:49.253 22:37:49 -- fips/fips.sh@16 -- # killprocess 89908 00:17:49.253 22:37:49 -- common/autotest_common.sh@936 -- # '[' -z 89908 ']' 00:17:49.253 22:37:49 -- common/autotest_common.sh@940 -- # kill -0 89908 00:17:49.253 22:37:49 -- common/autotest_common.sh@941 -- # uname 00:17:49.253 22:37:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.253 22:37:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89908 00:17:49.253 killing process with pid 89908 00:17:49.253 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.253 00:17:49.253 Latency(us) 00:17:49.253 
[2024-11-20T22:37:49.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.253 [2024-11-20T22:37:49.987Z] =================================================================================================================== 00:17:49.253 [2024-11-20T22:37:49.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.253 22:37:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:49.253 22:37:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:49.253 22:37:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89908' 00:17:49.253 22:37:49 -- common/autotest_common.sh@955 -- # kill 89908 00:17:49.253 22:37:49 -- common/autotest_common.sh@960 -- # wait 89908 00:17:49.511 22:37:50 -- fips/fips.sh@17 -- # nvmftestfini 00:17:49.511 22:37:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:49.511 22:37:50 -- nvmf/common.sh@116 -- # sync 00:17:49.511 22:37:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:49.511 22:37:50 -- nvmf/common.sh@119 -- # set +e 00:17:49.511 22:37:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:49.511 22:37:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:49.511 rmmod nvme_tcp 00:17:49.511 rmmod nvme_fabrics 00:17:49.511 rmmod nvme_keyring 00:17:49.771 22:37:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:49.771 22:37:50 -- nvmf/common.sh@123 -- # set -e 00:17:49.771 22:37:50 -- nvmf/common.sh@124 -- # return 0 00:17:49.771 22:37:50 -- nvmf/common.sh@477 -- # '[' -n 89856 ']' 00:17:49.771 22:37:50 -- nvmf/common.sh@478 -- # killprocess 89856 00:17:49.771 22:37:50 -- common/autotest_common.sh@936 -- # '[' -z 89856 ']' 00:17:49.771 22:37:50 -- common/autotest_common.sh@940 -- # kill -0 89856 00:17:49.771 22:37:50 -- common/autotest_common.sh@941 -- # uname 00:17:49.771 22:37:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.771 22:37:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89856 00:17:49.771 killing process with pid 89856 00:17:49.771 22:37:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:49.771 22:37:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:49.771 22:37:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89856' 00:17:49.771 22:37:50 -- common/autotest_common.sh@955 -- # kill 89856 00:17:49.771 22:37:50 -- common/autotest_common.sh@960 -- # wait 89856 00:17:50.031 22:37:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.031 22:37:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:50.031 22:37:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:50.031 22:37:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.031 22:37:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:50.031 22:37:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.031 22:37:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.031 22:37:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.031 22:37:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:50.031 22:37:50 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:50.031 00:17:50.031 real 0m14.413s 00:17:50.031 user 0m18.141s 00:17:50.031 sys 0m6.593s 00:17:50.031 22:37:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.031 ************************************ 00:17:50.031 END TEST nvmf_fips 00:17:50.031 ************************************ 
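Editorial note: the cleanup traced just above (nvmftestfini plus the fips-specific rm) is the same teardown pattern every test in this run uses. A rough sketch, assuming _remove_spdk_ns amounts to deleting the namespace created earlier (its output is redirected in the trace, so that step is inferred) and using the pid from this run:

  sync
  modprobe -v -r nvme-tcp              # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
  kill 89856 && wait 89856             # target pid for this run
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt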
00:17:50.031 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:17:50.031 22:37:50 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:50.031 22:37:50 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:50.031 22:37:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:50.031 22:37:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.031 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:17:50.031 ************************************ 00:17:50.031 START TEST nvmf_fuzz 00:17:50.031 ************************************ 00:17:50.031 22:37:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:50.031 * Looking for test storage... 00:17:50.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:50.031 22:37:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:50.031 22:37:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:50.031 22:37:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:50.291 22:37:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:50.291 22:37:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:50.291 22:37:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.291 22:37:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.291 22:37:50 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.291 22:37:50 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.291 22:37:50 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.291 22:37:50 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.291 22:37:50 -- scripts/common.sh@337 -- # local 'op=<' 00:17:50.291 22:37:50 -- scripts/common.sh@339 -- # ver1_l=2 00:17:50.291 22:37:50 -- scripts/common.sh@340 -- # ver2_l=1 00:17:50.291 22:37:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.291 22:37:50 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.291 22:37:50 -- scripts/common.sh@344 -- # : 1 00:17:50.291 22:37:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.291 22:37:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.291 22:37:50 -- scripts/common.sh@364 -- # decimal 1 00:17:50.291 22:37:50 -- scripts/common.sh@352 -- # local d=1 00:17:50.291 22:37:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.291 22:37:50 -- scripts/common.sh@354 -- # echo 1 00:17:50.291 22:37:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.291 22:37:50 -- scripts/common.sh@365 -- # decimal 2 00:17:50.291 22:37:50 -- scripts/common.sh@352 -- # local d=2 00:17:50.291 22:37:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.291 22:37:50 -- scripts/common.sh@354 -- # echo 2 00:17:50.291 22:37:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:50.291 22:37:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.291 22:37:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.291 22:37:50 -- scripts/common.sh@367 -- # return 0 00:17:50.291 22:37:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.291 22:37:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.291 --rc genhtml_branch_coverage=1 00:17:50.291 --rc genhtml_function_coverage=1 00:17:50.291 --rc genhtml_legend=1 00:17:50.291 --rc geninfo_all_blocks=1 00:17:50.291 --rc geninfo_unexecuted_blocks=1 00:17:50.291 00:17:50.291 ' 00:17:50.291 22:37:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.291 --rc genhtml_branch_coverage=1 00:17:50.291 --rc genhtml_function_coverage=1 00:17:50.291 --rc genhtml_legend=1 00:17:50.291 --rc geninfo_all_blocks=1 00:17:50.291 --rc geninfo_unexecuted_blocks=1 00:17:50.291 00:17:50.291 ' 00:17:50.291 22:37:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.291 --rc genhtml_branch_coverage=1 00:17:50.291 --rc genhtml_function_coverage=1 00:17:50.291 --rc genhtml_legend=1 00:17:50.291 --rc geninfo_all_blocks=1 00:17:50.291 --rc geninfo_unexecuted_blocks=1 00:17:50.291 00:17:50.291 ' 00:17:50.291 22:37:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.291 --rc genhtml_branch_coverage=1 00:17:50.291 --rc genhtml_function_coverage=1 00:17:50.291 --rc genhtml_legend=1 00:17:50.291 --rc geninfo_all_blocks=1 00:17:50.291 --rc geninfo_unexecuted_blocks=1 00:17:50.291 00:17:50.291 ' 00:17:50.291 22:37:50 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.291 22:37:50 -- nvmf/common.sh@7 -- # uname -s 00:17:50.291 22:37:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.291 22:37:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.291 22:37:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.291 22:37:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.291 22:37:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.291 22:37:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.291 22:37:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.291 22:37:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.291 22:37:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.291 22:37:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.291 22:37:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
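Editorial note: the cmp_versions trace above is how autotest decides whether the installed lcov is new enough to accept the branch/function coverage flags: both version strings are split on '.', '-' and ':' and compared field by field. A simplified sketch of the same idea (the real helper in scripts/common.sh also supports other comparison operators):

  # returns 0 when $1 is an older version than $2, e.g. version_lt 1.15 2
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi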
00:17:50.291 22:37:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:17:50.291 22:37:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.291 22:37:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.291 22:37:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.292 22:37:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.292 22:37:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.292 22:37:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.292 22:37:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.292 22:37:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.292 22:37:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.292 22:37:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.292 22:37:50 -- paths/export.sh@5 -- # export PATH 00:17:50.292 22:37:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.292 22:37:50 -- nvmf/common.sh@46 -- # : 0 00:17:50.292 22:37:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.292 22:37:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.292 22:37:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.292 22:37:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.292 22:37:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.292 22:37:50 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:50.292 22:37:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.292 22:37:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.292 22:37:50 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:50.292 22:37:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.292 22:37:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.292 22:37:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.292 22:37:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.292 22:37:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.292 22:37:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.292 22:37:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.292 22:37:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.292 22:37:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:50.292 22:37:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:50.292 22:37:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:50.292 22:37:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:50.292 22:37:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:50.292 22:37:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:50.292 22:37:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.292 22:37:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.292 22:37:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:50.292 22:37:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:50.292 22:37:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.292 22:37:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.292 22:37:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.292 22:37:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.292 22:37:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.292 22:37:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.292 22:37:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.292 22:37:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.292 22:37:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:50.292 22:37:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:50.292 Cannot find device "nvmf_tgt_br" 00:17:50.292 22:37:50 -- nvmf/common.sh@154 -- # true 00:17:50.292 22:37:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.292 Cannot find device "nvmf_tgt_br2" 00:17:50.292 22:37:50 -- nvmf/common.sh@155 -- # true 00:17:50.292 22:37:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:50.292 22:37:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:50.292 Cannot find device "nvmf_tgt_br" 00:17:50.292 22:37:50 -- nvmf/common.sh@157 -- # true 00:17:50.292 22:37:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:50.292 Cannot find device "nvmf_tgt_br2" 00:17:50.292 22:37:50 -- nvmf/common.sh@158 -- # true 00:17:50.292 22:37:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:50.292 22:37:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:50.292 22:37:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.551 22:37:51 -- nvmf/common.sh@161 -- # true 00:17:50.551 22:37:51 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.551 22:37:51 -- nvmf/common.sh@162 -- # true 00:17:50.551 22:37:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.551 22:37:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.551 22:37:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.551 22:37:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.551 22:37:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.551 22:37:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.551 22:37:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.551 22:37:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:50.551 22:37:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:50.551 22:37:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:50.551 22:37:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:50.551 22:37:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:50.551 22:37:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:50.551 22:37:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.551 22:37:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.551 22:37:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.551 22:37:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:50.551 22:37:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:50.551 22:37:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:50.551 22:37:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.551 22:37:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.551 22:37:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.551 22:37:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.551 22:37:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:50.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:50.551 00:17:50.551 --- 10.0.0.2 ping statistics --- 00:17:50.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.551 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:50.551 22:37:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:50.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:50.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:17:50.551 00:17:50.551 --- 10.0.0.3 ping statistics --- 00:17:50.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.551 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:50.551 22:37:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:50.551 00:17:50.551 --- 10.0.0.1 ping statistics --- 00:17:50.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.551 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:50.551 22:37:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.551 22:37:51 -- nvmf/common.sh@421 -- # return 0 00:17:50.551 22:37:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:50.551 22:37:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.551 22:37:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:50.551 22:37:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:50.551 22:37:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.551 22:37:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:50.551 22:37:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:50.551 22:37:51 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90265 00:17:50.551 22:37:51 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:50.551 22:37:51 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:50.551 22:37:51 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90265 00:17:50.551 22:37:51 -- common/autotest_common.sh@829 -- # '[' -z 90265 ']' 00:17:50.551 22:37:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.551 22:37:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.551 22:37:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
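Editorial note: the fabrics_fuzz run that follows drives the freshly started target purely over JSON-RPC before pointing nvme_fuzz at the listener. Condensed from the rpc_cmd calls traced below (paths, NQN and serial are the test's own; rpc.py talks to the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create -b Malloc0 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # then hammer the listener for 30 seconds with a fixed seed
  /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a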
00:17:50.551 22:37:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.551 22:37:51 -- common/autotest_common.sh@10 -- # set +x 00:17:51.962 22:37:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.962 22:37:52 -- common/autotest_common.sh@862 -- # return 0 00:17:51.962 22:37:52 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.962 22:37:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.962 22:37:52 -- common/autotest_common.sh@10 -- # set +x 00:17:51.962 22:37:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.962 22:37:52 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:51.962 22:37:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.962 22:37:52 -- common/autotest_common.sh@10 -- # set +x 00:17:51.962 Malloc0 00:17:51.962 22:37:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.962 22:37:52 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.962 22:37:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.962 22:37:52 -- common/autotest_common.sh@10 -- # set +x 00:17:51.962 22:37:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.962 22:37:52 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.962 22:37:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.962 22:37:52 -- common/autotest_common.sh@10 -- # set +x 00:17:51.962 22:37:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.962 22:37:52 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.962 22:37:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.962 22:37:52 -- common/autotest_common.sh@10 -- # set +x 00:17:51.962 22:37:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.962 22:37:52 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:51.962 22:37:52 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:52.221 Shutting down the fuzz application 00:17:52.221 22:37:52 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:52.479 Shutting down the fuzz application 00:17:52.479 22:37:53 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.479 22:37:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.479 22:37:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.479 22:37:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.479 22:37:53 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:52.479 22:37:53 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:52.479 22:37:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:52.479 22:37:53 -- nvmf/common.sh@116 -- # sync 00:17:52.479 22:37:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:52.479 22:37:53 -- nvmf/common.sh@119 -- # set +e 00:17:52.479 22:37:53 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:52.479 22:37:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:52.479 rmmod nvme_tcp 00:17:52.479 rmmod nvme_fabrics 00:17:52.479 rmmod nvme_keyring 00:17:52.479 22:37:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:52.479 22:37:53 -- nvmf/common.sh@123 -- # set -e 00:17:52.479 22:37:53 -- nvmf/common.sh@124 -- # return 0 00:17:52.479 22:37:53 -- nvmf/common.sh@477 -- # '[' -n 90265 ']' 00:17:52.479 22:37:53 -- nvmf/common.sh@478 -- # killprocess 90265 00:17:52.479 22:37:53 -- common/autotest_common.sh@936 -- # '[' -z 90265 ']' 00:17:52.479 22:37:53 -- common/autotest_common.sh@940 -- # kill -0 90265 00:17:52.479 22:37:53 -- common/autotest_common.sh@941 -- # uname 00:17:52.479 22:37:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:52.479 22:37:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90265 00:17:52.738 22:37:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:52.738 22:37:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:52.738 killing process with pid 90265 00:17:52.738 22:37:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90265' 00:17:52.738 22:37:53 -- common/autotest_common.sh@955 -- # kill 90265 00:17:52.738 22:37:53 -- common/autotest_common.sh@960 -- # wait 90265 00:17:52.996 22:37:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:52.996 22:37:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:52.996 22:37:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:52.996 22:37:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:52.996 22:37:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:52.996 22:37:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.996 22:37:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.996 22:37:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.996 22:37:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:52.996 22:37:53 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:52.996 00:17:52.996 real 0m2.878s 00:17:52.996 user 0m2.869s 00:17:52.996 sys 0m0.799s 00:17:52.996 22:37:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:52.996 22:37:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.996 ************************************ 00:17:52.996 END TEST nvmf_fuzz 00:17:52.996 ************************************ 00:17:52.997 22:37:53 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:52.997 22:37:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:52.997 22:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.997 22:37:53 -- common/autotest_common.sh@10 -- # set +x 00:17:52.997 ************************************ 00:17:52.997 START TEST nvmf_multiconnection 00:17:52.997 ************************************ 00:17:52.997 22:37:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:52.997 * Looking for test storage... 
00:17:52.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:52.997 22:37:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:52.997 22:37:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:52.997 22:37:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:53.256 22:37:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:53.256 22:37:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:53.256 22:37:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:53.256 22:37:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:53.256 22:37:53 -- scripts/common.sh@335 -- # IFS=.-: 00:17:53.256 22:37:53 -- scripts/common.sh@335 -- # read -ra ver1 00:17:53.256 22:37:53 -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.256 22:37:53 -- scripts/common.sh@336 -- # read -ra ver2 00:17:53.256 22:37:53 -- scripts/common.sh@337 -- # local 'op=<' 00:17:53.256 22:37:53 -- scripts/common.sh@339 -- # ver1_l=2 00:17:53.256 22:37:53 -- scripts/common.sh@340 -- # ver2_l=1 00:17:53.256 22:37:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:53.256 22:37:53 -- scripts/common.sh@343 -- # case "$op" in 00:17:53.256 22:37:53 -- scripts/common.sh@344 -- # : 1 00:17:53.256 22:37:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:53.256 22:37:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.256 22:37:53 -- scripts/common.sh@364 -- # decimal 1 00:17:53.256 22:37:53 -- scripts/common.sh@352 -- # local d=1 00:17:53.256 22:37:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.256 22:37:53 -- scripts/common.sh@354 -- # echo 1 00:17:53.256 22:37:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:53.256 22:37:53 -- scripts/common.sh@365 -- # decimal 2 00:17:53.256 22:37:53 -- scripts/common.sh@352 -- # local d=2 00:17:53.256 22:37:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.256 22:37:53 -- scripts/common.sh@354 -- # echo 2 00:17:53.256 22:37:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:53.256 22:37:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.256 22:37:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:53.256 22:37:53 -- scripts/common.sh@367 -- # return 0 00:17:53.256 22:37:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.256 22:37:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:53.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.256 --rc genhtml_branch_coverage=1 00:17:53.256 --rc genhtml_function_coverage=1 00:17:53.256 --rc genhtml_legend=1 00:17:53.256 --rc geninfo_all_blocks=1 00:17:53.256 --rc geninfo_unexecuted_blocks=1 00:17:53.256 00:17:53.256 ' 00:17:53.256 22:37:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:53.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.256 --rc genhtml_branch_coverage=1 00:17:53.256 --rc genhtml_function_coverage=1 00:17:53.256 --rc genhtml_legend=1 00:17:53.256 --rc geninfo_all_blocks=1 00:17:53.256 --rc geninfo_unexecuted_blocks=1 00:17:53.256 00:17:53.256 ' 00:17:53.256 22:37:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:53.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.256 --rc genhtml_branch_coverage=1 00:17:53.256 --rc genhtml_function_coverage=1 00:17:53.256 --rc genhtml_legend=1 00:17:53.256 --rc geninfo_all_blocks=1 00:17:53.256 --rc geninfo_unexecuted_blocks=1 00:17:53.256 00:17:53.256 ' 00:17:53.256 
22:37:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:53.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.256 --rc genhtml_branch_coverage=1 00:17:53.256 --rc genhtml_function_coverage=1 00:17:53.256 --rc genhtml_legend=1 00:17:53.256 --rc geninfo_all_blocks=1 00:17:53.256 --rc geninfo_unexecuted_blocks=1 00:17:53.256 00:17:53.256 ' 00:17:53.256 22:37:53 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:53.256 22:37:53 -- nvmf/common.sh@7 -- # uname -s 00:17:53.256 22:37:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.256 22:37:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.256 22:37:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.256 22:37:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.256 22:37:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.256 22:37:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.256 22:37:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.256 22:37:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.256 22:37:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.256 22:37:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.256 22:37:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:17:53.256 22:37:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:17:53.256 22:37:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.256 22:37:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.256 22:37:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:53.256 22:37:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:53.256 22:37:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.256 22:37:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.256 22:37:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.256 22:37:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.256 22:37:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.256 22:37:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.256 22:37:53 -- paths/export.sh@5 -- # export PATH 00:17:53.256 22:37:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.256 22:37:53 -- nvmf/common.sh@46 -- # : 0 00:17:53.256 22:37:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.256 22:37:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.256 22:37:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.256 22:37:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.256 22:37:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.256 22:37:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:53.256 22:37:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.256 22:37:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.256 22:37:53 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:53.256 22:37:53 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:53.256 22:37:53 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:53.256 22:37:53 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:53.256 22:37:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:53.256 22:37:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.257 22:37:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:53.257 22:37:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:53.257 22:37:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:53.257 22:37:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.257 22:37:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.257 22:37:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.257 22:37:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:53.257 22:37:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:53.257 22:37:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:53.257 22:37:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:53.257 22:37:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:53.257 22:37:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:53.257 22:37:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.257 22:37:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.257 22:37:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:53.257 22:37:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:53.257 22:37:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:53.257 22:37:53 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:53.257 22:37:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:53.257 22:37:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.257 22:37:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:53.257 22:37:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:53.257 22:37:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:53.257 22:37:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:53.257 22:37:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:53.257 22:37:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:53.257 Cannot find device "nvmf_tgt_br" 00:17:53.257 22:37:53 -- nvmf/common.sh@154 -- # true 00:17:53.257 22:37:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.257 Cannot find device "nvmf_tgt_br2" 00:17:53.257 22:37:53 -- nvmf/common.sh@155 -- # true 00:17:53.257 22:37:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:53.257 22:37:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:53.257 Cannot find device "nvmf_tgt_br" 00:17:53.257 22:37:53 -- nvmf/common.sh@157 -- # true 00:17:53.257 22:37:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:53.257 Cannot find device "nvmf_tgt_br2" 00:17:53.257 22:37:53 -- nvmf/common.sh@158 -- # true 00:17:53.257 22:37:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:53.257 22:37:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:53.257 22:37:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.257 22:37:53 -- nvmf/common.sh@161 -- # true 00:17:53.257 22:37:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.257 22:37:53 -- nvmf/common.sh@162 -- # true 00:17:53.257 22:37:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:53.257 22:37:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:53.257 22:37:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:53.257 22:37:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:53.257 22:37:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:53.257 22:37:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.516 22:37:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:53.516 22:37:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:53.516 22:37:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:53.516 22:37:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:53.516 22:37:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:53.516 22:37:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:53.516 22:37:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:53.516 22:37:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:53.516 22:37:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:17:53.516 22:37:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:53.516 22:37:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:53.516 22:37:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:53.516 22:37:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:53.516 22:37:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:53.516 22:37:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:53.516 22:37:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:53.516 22:37:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:53.516 22:37:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:53.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:17:53.516 00:17:53.516 --- 10.0.0.2 ping statistics --- 00:17:53.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.516 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:53.516 22:37:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:53.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:53.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:53.516 00:17:53.516 --- 10.0.0.3 ping statistics --- 00:17:53.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.516 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:53.516 22:37:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:53.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:53.516 00:17:53.516 --- 10.0.0.1 ping statistics --- 00:17:53.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.516 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:53.516 22:37:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.516 22:37:54 -- nvmf/common.sh@421 -- # return 0 00:17:53.516 22:37:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:53.516 22:37:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.516 22:37:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:53.516 22:37:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:53.516 22:37:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.516 22:37:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:53.516 22:37:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:53.516 22:37:54 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:53.516 22:37:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:53.516 22:37:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.516 22:37:54 -- common/autotest_common.sh@10 -- # set +x 00:17:53.516 22:37:54 -- nvmf/common.sh@469 -- # nvmfpid=90480 00:17:53.516 22:37:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.516 22:37:54 -- nvmf/common.sh@470 -- # waitforlisten 90480 00:17:53.516 22:37:54 -- common/autotest_common.sh@829 -- # '[' -z 90480 ']' 00:17:53.516 22:37:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.516 22:37:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.516 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:17:53.516 22:37:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.516 22:37:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.516 22:37:54 -- common/autotest_common.sh@10 -- # set +x 00:17:53.516 [2024-11-20 22:37:54.229580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:53.516 [2024-11-20 22:37:54.229669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.775 [2024-11-20 22:37:54.372689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.775 [2024-11-20 22:37:54.470128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:53.775 [2024-11-20 22:37:54.470333] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.775 [2024-11-20 22:37:54.470351] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.775 [2024-11-20 22:37:54.470364] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.775 [2024-11-20 22:37:54.470452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.775 [2024-11-20 22:37:54.471309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.775 [2024-11-20 22:37:54.471312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.775 [2024-11-20 22:37:54.471316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.710 22:37:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.710 22:37:55 -- common/autotest_common.sh@862 -- # return 0 00:17:54.710 22:37:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:54.710 22:37:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 22:37:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.710 22:37:55 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.710 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 [2024-11-20 22:37:55.312359] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.710 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.710 22:37:55 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:54.710 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.710 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:54.710 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 Malloc1 00:17:54.710 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.710 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:54.710 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.710 
22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.710 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.710 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.710 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 [2024-11-20 22:37:55.392819] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.710 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.710 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.710 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:54.710 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 Malloc2 00:17:54.710 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.710 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:54.710 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.710 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:54.710 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.710 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.969 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 Malloc3 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.969 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 Malloc4 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.969 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 Malloc5 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.969 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 Malloc6 00:17:54.969 22:37:55 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.969 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 Malloc7 00:17:54.969 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.969 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:54.969 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.969 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.228 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.228 Malloc8 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 
-- common/autotest_common.sh@10 -- # set +x 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.228 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.228 Malloc9 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.228 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.228 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:55.228 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.228 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.229 22:37:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 Malloc10 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.229 22:37:55 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 Malloc11 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:55.229 22:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.229 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 22:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.229 22:37:55 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:55.229 22:37:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.229 22:37:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:55.488 22:37:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:55.488 22:37:56 -- common/autotest_common.sh@1187 -- # local i=0 00:17:55.488 22:37:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.488 22:37:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:55.488 22:37:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:58.020 22:37:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:58.020 22:37:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:58.020 22:37:58 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:17:58.020 22:37:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:58.020 22:37:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.020 22:37:58 -- common/autotest_common.sh@1197 -- # return 0 00:17:58.020 22:37:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.021 22:37:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:58.021 22:37:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:58.021 22:37:58 -- common/autotest_common.sh@1187 -- # local i=0 00:17:58.021 22:37:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.021 22:37:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:58.021 22:37:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:59.924 22:38:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:59.924 22:38:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:17:59.924 22:38:00 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:17:59.924 22:38:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:59.924 22:38:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.924 22:38:00 -- common/autotest_common.sh@1197 -- # return 0 00:17:59.924 22:38:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.924 22:38:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:59.924 22:38:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:59.924 22:38:00 -- common/autotest_common.sh@1187 -- # local i=0 00:17:59.924 22:38:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.924 22:38:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:59.924 22:38:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:01.826 22:38:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:01.826 22:38:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:01.826 22:38:02 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:01.826 22:38:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:01.826 22:38:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.826 22:38:02 -- common/autotest_common.sh@1197 -- # return 0 00:18:01.826 22:38:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.826 22:38:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:02.085 22:38:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:02.085 22:38:02 -- common/autotest_common.sh@1187 -- # local i=0 00:18:02.085 22:38:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.085 22:38:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:02.085 22:38:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:04.617 22:38:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:04.617 22:38:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:04.617 22:38:04 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:04.617 22:38:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:04.617 22:38:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.617 22:38:04 -- common/autotest_common.sh@1197 -- # return 0 00:18:04.617 22:38:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:04.617 22:38:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:04.617 22:38:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:04.617 22:38:04 -- common/autotest_common.sh@1187 -- # local i=0 00:18:04.617 22:38:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.617 22:38:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:04.617 22:38:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:06.522 22:38:06 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:06.522 22:38:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:06.522 22:38:06 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:06.522 22:38:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:06.522 22:38:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.522 22:38:06 -- common/autotest_common.sh@1197 -- # return 0 00:18:06.522 22:38:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:06.522 22:38:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:06.522 22:38:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:06.522 22:38:07 -- common/autotest_common.sh@1187 -- # local i=0 00:18:06.522 22:38:07 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.522 22:38:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:06.522 22:38:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:08.424 22:38:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:08.424 22:38:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:08.424 22:38:09 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:08.687 22:38:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:08.687 22:38:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.687 22:38:09 -- common/autotest_common.sh@1197 -- # return 0 00:18:08.687 22:38:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.687 22:38:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:08.687 22:38:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:08.687 22:38:09 -- common/autotest_common.sh@1187 -- # local i=0 00:18:08.687 22:38:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.687 22:38:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:08.687 22:38:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:11.223 22:38:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:11.223 22:38:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:11.223 22:38:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:11.223 22:38:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:11.223 22:38:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.223 22:38:11 -- common/autotest_common.sh@1197 -- # return 0 00:18:11.223 22:38:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.224 22:38:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:11.224 22:38:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:11.224 22:38:11 -- common/autotest_common.sh@1187 -- # local i=0 00:18:11.224 22:38:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.224 22:38:11 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:11.224 22:38:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:13.125 22:38:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:13.125 22:38:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:13.125 22:38:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:13.125 22:38:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:13.125 22:38:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.125 22:38:13 -- common/autotest_common.sh@1197 -- # return 0 00:18:13.125 22:38:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:13.125 22:38:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:13.125 22:38:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:13.125 22:38:13 -- common/autotest_common.sh@1187 -- # local i=0 00:18:13.125 22:38:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.125 22:38:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:13.125 22:38:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:15.029 22:38:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:15.029 22:38:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:15.029 22:38:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:15.288 22:38:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:15.288 22:38:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.288 22:38:15 -- common/autotest_common.sh@1197 -- # return 0 00:18:15.288 22:38:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:15.288 22:38:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:15.288 22:38:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:15.288 22:38:15 -- common/autotest_common.sh@1187 -- # local i=0 00:18:15.288 22:38:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.288 22:38:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:15.288 22:38:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:17.819 22:38:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:17.819 22:38:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:17.819 22:38:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:17.819 22:38:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:17.819 22:38:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.819 22:38:17 -- common/autotest_common.sh@1197 -- # return 0 00:18:17.819 22:38:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.819 22:38:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:17.819 22:38:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:17.819 22:38:18 -- common/autotest_common.sh@1187 -- # local i=0 
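On the host side, each of the eleven subsystems is attached with nvme-cli and the script then polls lsblk until a block device with the expected serial appears. A simplified equivalent of the nvme connect / waitforserial pair traced above, using the same host NQN and host ID as this run, would be:

    for i in $(seq 1 11); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 \
            --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27
        # waitforserial: retry (up to 15 times in the real helper) until the namespace shows up
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
            sleep 2
        done
    done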
00:18:17.819 22:38:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.819 22:38:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:17.819 22:38:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:19.722 22:38:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:19.722 22:38:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:19.722 22:38:20 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:19.722 22:38:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:19.722 22:38:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.722 22:38:20 -- common/autotest_common.sh@1197 -- # return 0 00:18:19.722 22:38:20 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:19.722 [global] 00:18:19.722 thread=1 00:18:19.722 invalidate=1 00:18:19.722 rw=read 00:18:19.722 time_based=1 00:18:19.722 runtime=10 00:18:19.722 ioengine=libaio 00:18:19.722 direct=1 00:18:19.722 bs=262144 00:18:19.722 iodepth=64 00:18:19.722 norandommap=1 00:18:19.722 numjobs=1 00:18:19.722 00:18:19.722 [job0] 00:18:19.722 filename=/dev/nvme0n1 00:18:19.722 [job1] 00:18:19.722 filename=/dev/nvme10n1 00:18:19.722 [job2] 00:18:19.722 filename=/dev/nvme1n1 00:18:19.722 [job3] 00:18:19.722 filename=/dev/nvme2n1 00:18:19.722 [job4] 00:18:19.722 filename=/dev/nvme3n1 00:18:19.722 [job5] 00:18:19.722 filename=/dev/nvme4n1 00:18:19.722 [job6] 00:18:19.722 filename=/dev/nvme5n1 00:18:19.722 [job7] 00:18:19.722 filename=/dev/nvme6n1 00:18:19.722 [job8] 00:18:19.722 filename=/dev/nvme7n1 00:18:19.722 [job9] 00:18:19.722 filename=/dev/nvme8n1 00:18:19.722 [job10] 00:18:19.722 filename=/dev/nvme9n1 00:18:19.722 Could not set queue depth (nvme0n1) 00:18:19.722 Could not set queue depth (nvme10n1) 00:18:19.722 Could not set queue depth (nvme1n1) 00:18:19.722 Could not set queue depth (nvme2n1) 00:18:19.722 Could not set queue depth (nvme3n1) 00:18:19.722 Could not set queue depth (nvme4n1) 00:18:19.722 Could not set queue depth (nvme5n1) 00:18:19.722 Could not set queue depth (nvme6n1) 00:18:19.722 Could not set queue depth (nvme7n1) 00:18:19.722 Could not set queue depth (nvme8n1) 00:18:19.722 Could not set queue depth (nvme9n1) 00:18:19.982 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
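The fio-wrapper invocation above (-p nvmf -i 262144 -d 64 -t read -r 10) generates the job file dumped in the log: one libaio job per connected namespace doing 256 KiB reads at queue depth 64 for 10 seconds; the second pass later in the log reuses the same parameters with -t randwrite. A roughly equivalent direct fio command for a single namespace, assuming /dev/nvme0n1, would be:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
        --invalidate=1 --rw=read --bs=262144 --iodepth=64 --numjobs=1 \
        --time_based --runtime=10 --norandommap

In the per-job results that follow, the per= figure is each job's share of the group aggregate: job0's 174MiB/s average against the 1474MiB/s total in the Run status group line works out to the 11.83% reported.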
iodepth=64 00:18:19.982 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:19.982 fio-3.35 00:18:19.982 Starting 11 threads 00:18:32.200 00:18:32.200 job0: (groupid=0, jobs=1): err= 0: pid=90959: Wed Nov 20 22:38:30 2024 00:18:32.200 read: IOPS=697, BW=174MiB/s (183MB/s)(1759MiB/10094msec) 00:18:32.200 slat (usec): min=16, max=182461, avg=1343.89, stdev=6454.44 00:18:32.201 clat (usec): min=999, max=444495, avg=90283.32, stdev=61258.32 00:18:32.201 lat (usec): min=1325, max=444567, avg=91627.20, stdev=62262.96 00:18:32.201 clat percentiles (msec): 00:18:32.201 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 52], 20.00th=[ 57], 00:18:32.201 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 72], 00:18:32.201 | 70.00th=[ 80], 80.00th=[ 99], 90.00th=[ 197], 95.00th=[ 239], 00:18:32.201 | 99.00th=[ 300], 99.50th=[ 321], 99.90th=[ 368], 99.95th=[ 368], 00:18:32.201 | 99.99th=[ 443] 00:18:32.201 bw ( KiB/s): min=63488, max=266240, per=11.83%, avg=178523.35, stdev=82145.82, samples=20 00:18:32.201 iops : min= 248, max= 1040, avg=697.25, stdev=320.93, samples=20 00:18:32.201 lat (usec) : 1000=0.01% 00:18:32.201 lat (msec) : 4=0.27%, 10=0.53%, 20=0.53%, 50=6.52%, 100=72.59% 00:18:32.201 lat (msec) : 250=15.48%, 500=4.08% 00:18:32.201 cpu : usr=0.26%, sys=2.33%, ctx=1117, majf=0, minf=4097 00:18:32.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:32.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.201 issued rwts: total=7037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.201 job1: (groupid=0, jobs=1): err= 0: pid=90960: Wed Nov 20 22:38:30 2024 00:18:32.201 read: IOPS=750, BW=188MiB/s (197MB/s)(1889MiB/10060msec) 00:18:32.201 slat (usec): min=14, max=67565, avg=1217.93, stdev=4786.90 00:18:32.201 clat (usec): min=1765, max=276302, avg=83861.16, stdev=33415.25 00:18:32.201 lat (usec): min=1812, max=276338, avg=85079.09, stdev=33990.49 00:18:32.201 clat percentiles (msec): 00:18:32.201 | 1.00th=[ 5], 5.00th=[ 30], 10.00th=[ 42], 20.00th=[ 58], 00:18:32.201 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 86], 60.00th=[ 95], 00:18:32.201 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 133], 00:18:32.201 | 99.00th=[ 167], 99.50th=[ 236], 99.90th=[ 255], 99.95th=[ 275], 00:18:32.201 | 99.99th=[ 275] 00:18:32.201 bw ( KiB/s): min=117760, max=332800, per=12.71%, avg=191714.25, stdev=63726.59, samples=20 00:18:32.201 iops : min= 460, max= 1300, avg=748.80, stdev=248.90, samples=20 00:18:32.201 lat (msec) : 2=0.15%, 4=0.75%, 10=0.49%, 20=1.35%, 50=11.04% 00:18:32.201 lat (msec) : 100=53.02%, 250=33.00%, 500=0.20% 00:18:32.201 cpu : usr=0.20%, sys=2.36%, ctx=1527, majf=0, minf=4097 00:18:32.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:32.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.201 issued rwts: total=7554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.201 job2: (groupid=0, jobs=1): err= 0: pid=90961: Wed Nov 20 22:38:30 2024 00:18:32.201 read: IOPS=551, BW=138MiB/s (145MB/s)(1394MiB/10104msec) 00:18:32.201 slat (usec): min=15, max=158109, avg=1651.45, stdev=7040.72 00:18:32.201 clat 
(msec): min=2, max=406, avg=114.19, stdev=52.09 00:18:32.201 lat (msec): min=2, max=407, avg=115.84, stdev=53.07 00:18:32.201 clat percentiles (msec): 00:18:32.201 | 1.00th=[ 23], 5.00th=[ 58], 10.00th=[ 69], 20.00th=[ 84], 00:18:32.201 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 102], 60.00th=[ 107], 00:18:32.201 | 70.00th=[ 114], 80.00th=[ 130], 90.00th=[ 207], 95.00th=[ 247], 00:18:32.201 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 292], 99.95th=[ 359], 00:18:32.201 | 99.99th=[ 409] 00:18:32.201 bw ( KiB/s): min=54784, max=232960, per=9.35%, avg=141068.55, stdev=46283.38, samples=20 00:18:32.201 iops : min= 214, max= 910, avg=551.00, stdev=180.79, samples=20 00:18:32.201 lat (msec) : 4=0.07%, 10=0.13%, 20=0.72%, 50=2.92%, 100=44.74% 00:18:32.201 lat (msec) : 250=47.89%, 500=3.53% 00:18:32.201 cpu : usr=0.26%, sys=1.97%, ctx=1111, majf=0, minf=4098 00:18:32.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:32.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.201 issued rwts: total=5575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.201 job3: (groupid=0, jobs=1): err= 0: pid=90962: Wed Nov 20 22:38:30 2024 00:18:32.201 read: IOPS=966, BW=242MiB/s (253MB/s)(2430MiB/10052msec) 00:18:32.201 slat (usec): min=17, max=134519, avg=996.79, stdev=4422.79 00:18:32.201 clat (usec): min=1971, max=263323, avg=65097.76, stdev=35507.26 00:18:32.201 lat (msec): min=2, max=263, avg=66.09, stdev=36.17 00:18:32.201 clat percentiles (msec): 00:18:32.201 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 26], 20.00th=[ 30], 00:18:32.201 | 30.00th=[ 34], 40.00th=[ 40], 50.00th=[ 62], 60.00th=[ 85], 00:18:32.201 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 110], 95.00th=[ 117], 00:18:32.201 | 99.00th=[ 133], 99.50th=[ 155], 99.90th=[ 188], 99.95th=[ 188], 00:18:32.201 | 99.99th=[ 264] 00:18:32.201 bw ( KiB/s): min=145920, max=524312, per=16.38%, avg=247127.75, stdev=131862.58, samples=20 00:18:32.201 iops : min= 570, max= 2048, avg=965.30, stdev=515.06, samples=20 00:18:32.201 lat (msec) : 2=0.01%, 4=0.25%, 10=0.40%, 20=1.18%, 50=45.74% 00:18:32.201 lat (msec) : 100=31.20%, 250=21.20%, 500=0.02% 00:18:32.201 cpu : usr=0.25%, sys=2.87%, ctx=1740, majf=0, minf=4097 00:18:32.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:32.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.201 issued rwts: total=9718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.201 job4: (groupid=0, jobs=1): err= 0: pid=90963: Wed Nov 20 22:38:30 2024 00:18:32.201 read: IOPS=681, BW=170MiB/s (179MB/s)(1721MiB/10095msec) 00:18:32.201 slat (usec): min=19, max=222729, avg=1443.05, stdev=8127.95 00:18:32.201 clat (msec): min=15, max=363, avg=92.27, stdev=57.16 00:18:32.201 lat (msec): min=15, max=471, avg=93.71, stdev=58.42 00:18:32.201 clat percentiles (msec): 00:18:32.201 | 1.00th=[ 27], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 58], 00:18:32.201 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 69], 60.00th=[ 73], 00:18:32.201 | 70.00th=[ 82], 80.00th=[ 112], 90.00th=[ 199], 95.00th=[ 230], 00:18:32.201 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 292], 00:18:32.201 | 99.99th=[ 363] 00:18:32.201 bw ( KiB/s): 
min=64000, max=272384, per=11.57%, avg=174507.45, stdev=79972.20, samples=20 00:18:32.201 iops : min= 250, max= 1064, avg=681.60, stdev=312.44, samples=20 00:18:32.201 lat (msec) : 20=0.65%, 50=6.68%, 100=68.64%, 250=21.10%, 500=2.92% 00:18:32.201 cpu : usr=0.21%, sys=2.09%, ctx=1355, majf=0, minf=4097 00:18:32.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:32.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.201 issued rwts: total=6882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.201 job5: (groupid=0, jobs=1): err= 0: pid=90964: Wed Nov 20 22:38:30 2024 00:18:32.201 read: IOPS=450, BW=113MiB/s (118MB/s)(1132MiB/10055msec) 00:18:32.201 slat (usec): min=19, max=172536, avg=2092.55, stdev=10720.23 00:18:32.201 clat (usec): min=430, max=353516, avg=139776.24, stdev=72386.03 00:18:32.201 lat (usec): min=657, max=398619, avg=141868.79, stdev=74163.50 00:18:32.201 clat percentiles (msec): 00:18:32.201 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 57], 20.00th=[ 75], 00:18:32.201 | 30.00th=[ 86], 40.00th=[ 105], 50.00th=[ 132], 60.00th=[ 186], 00:18:32.201 | 70.00th=[ 201], 80.00th=[ 209], 90.00th=[ 224], 95.00th=[ 247], 00:18:32.201 | 99.00th=[ 262], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 317], 00:18:32.201 | 99.99th=[ 355] 00:18:32.201 bw ( KiB/s): min=64383, max=248832, per=7.57%, avg=114255.50, stdev=56993.98, samples=20 00:18:32.201 iops : min= 251, max= 972, avg=446.20, stdev=222.60, samples=20 00:18:32.202 lat (usec) : 500=0.02%, 1000=0.04% 00:18:32.202 lat (msec) : 2=0.46%, 4=4.22%, 10=1.79%, 20=0.66%, 50=1.24% 00:18:32.202 lat (msec) : 100=28.95%, 250=58.44%, 500=4.17% 00:18:32.202 cpu : usr=0.17%, sys=1.47%, ctx=907, majf=0, minf=4097 00:18:32.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:32.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.202 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.202 job6: (groupid=0, jobs=1): err= 0: pid=90965: Wed Nov 20 22:38:30 2024 00:18:32.202 read: IOPS=309, BW=77.5MiB/s (81.2MB/s)(783MiB/10105msec) 00:18:32.202 slat (usec): min=16, max=117003, avg=3127.62, stdev=11238.67 00:18:32.202 clat (msec): min=29, max=357, avg=202.95, stdev=38.28 00:18:32.202 lat (msec): min=30, max=358, avg=206.08, stdev=40.29 00:18:32.202 clat percentiles (msec): 00:18:32.202 | 1.00th=[ 89], 5.00th=[ 132], 10.00th=[ 148], 20.00th=[ 184], 00:18:32.202 | 30.00th=[ 194], 40.00th=[ 199], 50.00th=[ 205], 60.00th=[ 209], 00:18:32.202 | 70.00th=[ 218], 80.00th=[ 230], 90.00th=[ 253], 95.00th=[ 266], 00:18:32.202 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 338], 99.95th=[ 355], 00:18:32.202 | 99.99th=[ 359] 00:18:32.202 bw ( KiB/s): min=62464, max=109349, per=5.20%, avg=78512.40, stdev=12068.46, samples=20 00:18:32.202 iops : min= 244, max= 427, avg=306.65, stdev=47.12, samples=20 00:18:32.202 lat (msec) : 50=0.26%, 100=1.25%, 250=87.58%, 500=10.92% 00:18:32.202 cpu : usr=0.18%, sys=1.12%, ctx=652, majf=0, minf=4097 00:18:32.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:32.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.202 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.202 issued rwts: total=3132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.202 job7: (groupid=0, jobs=1): err= 0: pid=90966: Wed Nov 20 22:38:30 2024 00:18:32.202 read: IOPS=332, BW=83.1MiB/s (87.2MB/s)(839MiB/10091msec) 00:18:32.202 slat (usec): min=21, max=112789, avg=2944.29, stdev=11210.92 00:18:32.202 clat (msec): min=5, max=355, avg=189.26, stdev=56.45 00:18:32.202 lat (msec): min=5, max=378, avg=192.20, stdev=58.28 00:18:32.202 clat percentiles (msec): 00:18:32.202 | 1.00th=[ 12], 5.00th=[ 74], 10.00th=[ 109], 20.00th=[ 178], 00:18:32.202 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 205], 60.00th=[ 209], 00:18:32.202 | 70.00th=[ 213], 80.00th=[ 222], 90.00th=[ 247], 95.00th=[ 257], 00:18:32.202 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 342], 99.95th=[ 342], 00:18:32.202 | 99.99th=[ 355] 00:18:32.202 bw ( KiB/s): min=62464, max=146432, per=5.58%, avg=84237.45, stdev=22717.43, samples=20 00:18:32.202 iops : min= 244, max= 572, avg=328.95, stdev=88.80, samples=20 00:18:32.202 lat (msec) : 10=0.66%, 20=3.64%, 50=0.33%, 100=3.73%, 250=83.01% 00:18:32.202 lat (msec) : 500=8.64% 00:18:32.202 cpu : usr=0.11%, sys=1.47%, ctx=539, majf=0, minf=4097 00:18:32.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:32.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.202 issued rwts: total=3355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.202 job8: (groupid=0, jobs=1): err= 0: pid=90973: Wed Nov 20 22:38:30 2024 00:18:32.202 read: IOPS=320, BW=80.1MiB/s (84.0MB/s)(809MiB/10104msec) 00:18:32.202 slat (usec): min=21, max=118433, avg=3025.55, stdev=11152.39 00:18:32.202 clat (msec): min=4, max=364, avg=196.45, stdev=51.67 00:18:32.202 lat (msec): min=4, max=387, avg=199.47, stdev=53.52 00:18:32.202 clat percentiles (msec): 00:18:32.202 | 1.00th=[ 11], 5.00th=[ 63], 10.00th=[ 142], 20.00th=[ 184], 00:18:32.202 | 30.00th=[ 194], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 211], 00:18:32.202 | 70.00th=[ 215], 80.00th=[ 226], 90.00th=[ 247], 95.00th=[ 262], 00:18:32.202 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 363], 00:18:32.202 | 99.99th=[ 363] 00:18:32.202 bw ( KiB/s): min=54784, max=161469, per=5.38%, avg=81195.40, stdev=21569.72, samples=20 00:18:32.202 iops : min= 214, max= 630, avg=317.10, stdev=84.12, samples=20 00:18:32.202 lat (msec) : 10=0.93%, 20=1.33%, 50=1.64%, 100=2.50%, 250=84.80% 00:18:32.202 lat (msec) : 500=8.81% 00:18:32.202 cpu : usr=0.08%, sys=1.25%, ctx=720, majf=0, minf=4097 00:18:32.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:32.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.202 issued rwts: total=3236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.202 job9: (groupid=0, jobs=1): err= 0: pid=90974: Wed Nov 20 22:38:30 2024 00:18:32.202 read: IOPS=481, BW=120MiB/s (126MB/s)(1212MiB/10057msec) 00:18:32.202 slat (usec): min=16, max=180029, avg=1959.19, stdev=9356.61 00:18:32.202 clat (msec): min=18, max=431, avg=130.65, stdev=72.52 00:18:32.202 lat (msec): min=18, max=431, avg=132.61, 
stdev=74.16 00:18:32.202 clat percentiles (msec): 00:18:32.202 | 1.00th=[ 29], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 64], 00:18:32.202 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 88], 60.00th=[ 188], 00:18:32.202 | 70.00th=[ 201], 80.00th=[ 209], 90.00th=[ 220], 95.00th=[ 232], 00:18:32.202 | 99.00th=[ 253], 99.50th=[ 257], 99.90th=[ 284], 99.95th=[ 288], 00:18:32.202 | 99.99th=[ 430] 00:18:32.202 bw ( KiB/s): min=64512, max=254973, per=8.11%, avg=122421.45, stdev=69787.79, samples=20 00:18:32.202 iops : min= 252, max= 995, avg=478.10, stdev=272.50, samples=20 00:18:32.202 lat (msec) : 20=0.10%, 50=4.60%, 100=49.07%, 250=44.24%, 500=1.98% 00:18:32.202 cpu : usr=0.11%, sys=1.75%, ctx=918, majf=0, minf=4097 00:18:32.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:32.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.202 issued rwts: total=4846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.202 job10: (groupid=0, jobs=1): err= 0: pid=90975: Wed Nov 20 22:38:30 2024 00:18:32.202 read: IOPS=366, BW=91.6MiB/s (96.0MB/s)(925MiB/10097msec) 00:18:32.202 slat (usec): min=14, max=113891, avg=2569.01, stdev=9185.25 00:18:32.202 clat (msec): min=8, max=317, avg=171.91, stdev=60.74 00:18:32.202 lat (msec): min=8, max=346, avg=174.48, stdev=62.27 00:18:32.202 clat percentiles (msec): 00:18:32.202 | 1.00th=[ 53], 5.00th=[ 72], 10.00th=[ 86], 20.00th=[ 107], 00:18:32.202 | 30.00th=[ 130], 40.00th=[ 150], 50.00th=[ 194], 60.00th=[ 205], 00:18:32.202 | 70.00th=[ 215], 80.00th=[ 224], 90.00th=[ 245], 95.00th=[ 255], 00:18:32.202 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 317], 99.95th=[ 317], 00:18:32.202 | 99.99th=[ 317] 00:18:32.202 bw ( KiB/s): min=61828, max=190464, per=6.16%, avg=93023.75, stdev=35786.22, samples=20 00:18:32.202 iops : min= 241, max= 744, avg=363.25, stdev=139.86, samples=20 00:18:32.202 lat (msec) : 10=0.27%, 20=0.32%, 50=0.16%, 100=15.85%, 250=75.96% 00:18:32.202 lat (msec) : 500=7.44% 00:18:32.202 cpu : usr=0.15%, sys=1.42%, ctx=762, majf=0, minf=4097 00:18:32.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:32.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.202 issued rwts: total=3698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.202 00:18:32.202 Run status group 0 (all jobs): 00:18:32.202 READ: bw=1474MiB/s (1545MB/s), 77.5MiB/s-242MiB/s (81.2MB/s-253MB/s), io=14.5GiB (15.6GB), run=10052-10105msec 00:18:32.202 00:18:32.202 Disk stats (read/write): 00:18:32.202 nvme0n1: ios=13952/0, merge=0/0, ticks=1236944/0, in_queue=1236944, util=97.42% 00:18:32.202 nvme10n1: ios=15005/0, merge=0/0, ticks=1239735/0, in_queue=1239735, util=97.40% 00:18:32.202 nvme1n1: ios=11037/0, merge=0/0, ticks=1237268/0, in_queue=1237268, util=97.86% 00:18:32.202 nvme2n1: ios=19308/0, merge=0/0, ticks=1228112/0, in_queue=1228112, util=97.61% 00:18:32.202 nvme3n1: ios=13636/0, merge=0/0, ticks=1235321/0, in_queue=1235321, util=97.66% 00:18:32.202 nvme4n1: ios=8929/0, merge=0/0, ticks=1245954/0, in_queue=1245954, util=98.28% 00:18:32.202 nvme5n1: ios=6142/0, merge=0/0, ticks=1237369/0, in_queue=1237369, util=98.30% 00:18:32.202 nvme6n1: ios=6582/0, merge=0/0, 
ticks=1245744/0, in_queue=1245744, util=98.55% 00:18:32.203 nvme7n1: ios=6359/0, merge=0/0, ticks=1237723/0, in_queue=1237723, util=98.76% 00:18:32.203 nvme8n1: ios=9564/0, merge=0/0, ticks=1241007/0, in_queue=1241007, util=98.50% 00:18:32.203 nvme9n1: ios=7268/0, merge=0/0, ticks=1237478/0, in_queue=1237478, util=98.92% 00:18:32.203 22:38:30 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:32.203 [global] 00:18:32.203 thread=1 00:18:32.203 invalidate=1 00:18:32.203 rw=randwrite 00:18:32.203 time_based=1 00:18:32.203 runtime=10 00:18:32.203 ioengine=libaio 00:18:32.203 direct=1 00:18:32.203 bs=262144 00:18:32.203 iodepth=64 00:18:32.203 norandommap=1 00:18:32.203 numjobs=1 00:18:32.203 00:18:32.203 [job0] 00:18:32.203 filename=/dev/nvme0n1 00:18:32.203 [job1] 00:18:32.203 filename=/dev/nvme10n1 00:18:32.203 [job2] 00:18:32.203 filename=/dev/nvme1n1 00:18:32.203 [job3] 00:18:32.203 filename=/dev/nvme2n1 00:18:32.203 [job4] 00:18:32.203 filename=/dev/nvme3n1 00:18:32.203 [job5] 00:18:32.203 filename=/dev/nvme4n1 00:18:32.203 [job6] 00:18:32.203 filename=/dev/nvme5n1 00:18:32.203 [job7] 00:18:32.203 filename=/dev/nvme6n1 00:18:32.203 [job8] 00:18:32.203 filename=/dev/nvme7n1 00:18:32.203 [job9] 00:18:32.203 filename=/dev/nvme8n1 00:18:32.203 [job10] 00:18:32.203 filename=/dev/nvme9n1 00:18:32.203 Could not set queue depth (nvme0n1) 00:18:32.203 Could not set queue depth (nvme10n1) 00:18:32.203 Could not set queue depth (nvme1n1) 00:18:32.203 Could not set queue depth (nvme2n1) 00:18:32.203 Could not set queue depth (nvme3n1) 00:18:32.203 Could not set queue depth (nvme4n1) 00:18:32.203 Could not set queue depth (nvme5n1) 00:18:32.203 Could not set queue depth (nvme6n1) 00:18:32.203 Could not set queue depth (nvme7n1) 00:18:32.203 Could not set queue depth (nvme8n1) 00:18:32.203 Could not set queue depth (nvme9n1) 00:18:32.203 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:32.203 fio-3.35 00:18:32.203 Starting 11 threads 00:18:42.240 00:18:42.240 job0: (groupid=0, jobs=1): err= 0: pid=91169: Wed Nov 20 22:38:41 2024 00:18:42.240 write: IOPS=605, BW=151MiB/s 
(159MB/s)(1531MiB/10115msec); 0 zone resets 00:18:42.240 slat (usec): min=17, max=9626, avg=1623.12, stdev=2953.85 00:18:42.240 clat (msec): min=10, max=238, avg=104.08, stdev=35.25 00:18:42.240 lat (msec): min=10, max=238, avg=105.70, stdev=35.69 00:18:42.240 clat percentiles (msec): 00:18:42.240 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 52], 00:18:42.240 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 127], 00:18:42.240 | 70.00th=[ 128], 80.00th=[ 128], 90.00th=[ 129], 95.00th=[ 130], 00:18:42.240 | 99.00th=[ 134], 99.50th=[ 176], 99.90th=[ 224], 99.95th=[ 230], 00:18:42.240 | 99.99th=[ 239] 00:18:42.240 bw ( KiB/s): min=125700, max=315392, per=13.06%, avg=155084.90, stdev=64782.92, samples=20 00:18:42.240 iops : min= 491, max= 1232, avg=605.75, stdev=253.08, samples=20 00:18:42.240 lat (msec) : 20=0.25%, 50=10.88%, 100=17.89%, 250=70.99% 00:18:42.240 cpu : usr=1.30%, sys=1.52%, ctx=8301, majf=0, minf=1 00:18:42.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:42.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.240 issued rwts: total=0,6122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.240 job1: (groupid=0, jobs=1): err= 0: pid=91170: Wed Nov 20 22:38:41 2024 00:18:42.240 write: IOPS=305, BW=76.4MiB/s (80.1MB/s)(777MiB/10170msec); 0 zone resets 00:18:42.240 slat (usec): min=19, max=124365, avg=3212.82, stdev=6059.77 00:18:42.240 clat (msec): min=25, max=366, avg=206.13, stdev=27.32 00:18:42.240 lat (msec): min=25, max=366, avg=209.34, stdev=27.02 00:18:42.240 clat percentiles (msec): 00:18:42.240 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:18:42.240 | 30.00th=[ 199], 40.00th=[ 201], 50.00th=[ 203], 60.00th=[ 203], 00:18:42.240 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 226], 95.00th=[ 268], 00:18:42.240 | 99.00th=[ 313], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 368], 00:18:42.240 | 99.99th=[ 368] 00:18:42.240 bw ( KiB/s): min=50176, max=83968, per=6.56%, avg=77903.80, stdev=7667.57, samples=20 00:18:42.240 iops : min= 196, max= 328, avg=304.25, stdev=29.98, samples=20 00:18:42.240 lat (msec) : 50=0.32%, 100=0.51%, 250=92.44%, 500=6.73% 00:18:42.240 cpu : usr=0.90%, sys=0.99%, ctx=2737, majf=0, minf=1 00:18:42.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:42.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.240 issued rwts: total=0,3107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.240 job2: (groupid=0, jobs=1): err= 0: pid=91182: Wed Nov 20 22:38:41 2024 00:18:42.240 write: IOPS=304, BW=76.2MiB/s (80.0MB/s)(775MiB/10164msec); 0 zone resets 00:18:42.240 slat (usec): min=18, max=99869, avg=3220.99, stdev=5989.64 00:18:42.240 clat (msec): min=102, max=359, avg=206.52, stdev=24.35 00:18:42.240 lat (msec): min=102, max=359, avg=209.74, stdev=23.95 00:18:42.240 clat percentiles (msec): 00:18:42.240 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:18:42.240 | 30.00th=[ 199], 40.00th=[ 201], 50.00th=[ 201], 60.00th=[ 205], 00:18:42.240 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 224], 95.00th=[ 275], 00:18:42.240 | 99.00th=[ 288], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 359], 
00:18:42.240 | 99.99th=[ 359] 00:18:42.240 bw ( KiB/s): min=53248, max=83968, per=6.55%, avg=77747.20, stdev=8070.30, samples=20 00:18:42.240 iops : min= 208, max= 328, avg=303.70, stdev=31.52, samples=20 00:18:42.240 lat (msec) : 250=92.39%, 500=7.61% 00:18:42.240 cpu : usr=0.73%, sys=0.86%, ctx=3452, majf=0, minf=1 00:18:42.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:42.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.240 issued rwts: total=0,3100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.240 job3: (groupid=0, jobs=1): err= 0: pid=91183: Wed Nov 20 22:38:41 2024 00:18:42.240 write: IOPS=529, BW=132MiB/s (139MB/s)(1338MiB/10102msec); 0 zone resets 00:18:42.240 slat (usec): min=19, max=11850, avg=1842.81, stdev=3195.49 00:18:42.240 clat (msec): min=10, max=225, avg=118.91, stdev=17.72 00:18:42.240 lat (msec): min=10, max=225, avg=120.75, stdev=17.77 00:18:42.240 clat percentiles (msec): 00:18:42.240 | 1.00th=[ 44], 5.00th=[ 89], 10.00th=[ 93], 20.00th=[ 118], 00:18:42.240 | 30.00th=[ 121], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 127], 00:18:42.240 | 70.00th=[ 128], 80.00th=[ 128], 90.00th=[ 129], 95.00th=[ 130], 00:18:42.240 | 99.00th=[ 134], 99.50th=[ 169], 99.90th=[ 218], 99.95th=[ 218], 00:18:42.240 | 99.99th=[ 226] 00:18:42.240 bw ( KiB/s): min=125440, max=189440, per=11.40%, avg=135415.60, stdev=17347.47, samples=20 00:18:42.240 iops : min= 490, max= 740, avg=528.95, stdev=67.73, samples=20 00:18:42.240 lat (msec) : 20=0.15%, 50=1.01%, 100=15.98%, 250=82.87% 00:18:42.240 cpu : usr=1.44%, sys=1.49%, ctx=4706, majf=0, minf=1 00:18:42.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:42.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.240 issued rwts: total=0,5352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.240 job4: (groupid=0, jobs=1): err= 0: pid=91184: Wed Nov 20 22:38:41 2024 00:18:42.240 write: IOPS=499, BW=125MiB/s (131MB/s)(1264MiB/10119msec); 0 zone resets 00:18:42.240 slat (usec): min=18, max=10173, avg=1974.70, stdev=3399.95 00:18:42.240 clat (msec): min=3, max=243, avg=126.09, stdev=16.45 00:18:42.240 lat (msec): min=3, max=243, avg=128.06, stdev=16.35 00:18:42.240 clat percentiles (msec): 00:18:42.240 | 1.00th=[ 64], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 124], 00:18:42.240 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 131], 00:18:42.240 | 70.00th=[ 132], 80.00th=[ 133], 90.00th=[ 136], 95.00th=[ 138], 00:18:42.240 | 99.00th=[ 142], 99.50th=[ 192], 99.90th=[ 228], 99.95th=[ 234], 00:18:42.240 | 99.99th=[ 245] 00:18:42.240 bw ( KiB/s): min=118784, max=168960, per=10.76%, avg=127782.80, stdev=11616.39, samples=20 00:18:42.240 iops : min= 464, max= 660, avg=499.15, stdev=45.38, samples=20 00:18:42.240 lat (msec) : 4=0.08%, 10=0.08%, 20=0.34%, 50=0.32%, 100=7.83% 00:18:42.240 lat (msec) : 250=91.36% 00:18:42.240 cpu : usr=0.81%, sys=1.21%, ctx=5957, majf=0, minf=1 00:18:42.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:42.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:18:42.240 issued rwts: total=0,5055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.240 job5: (groupid=0, jobs=1): err= 0: pid=91185: Wed Nov 20 22:38:41 2024 00:18:42.240 write: IOPS=307, BW=76.9MiB/s (80.7MB/s)(782MiB/10168msec); 0 zone resets 00:18:42.240 slat (usec): min=21, max=49348, avg=3190.40, stdev=5738.02 00:18:42.240 clat (msec): min=27, max=379, avg=204.70, stdev=27.07 00:18:42.240 lat (msec): min=27, max=379, avg=207.89, stdev=26.83 00:18:42.240 clat percentiles (msec): 00:18:42.240 | 1.00th=[ 111], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:18:42.240 | 30.00th=[ 199], 40.00th=[ 201], 50.00th=[ 201], 60.00th=[ 203], 00:18:42.240 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 224], 95.00th=[ 266], 00:18:42.240 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 368], 99.95th=[ 380], 00:18:42.240 | 99.99th=[ 380] 00:18:42.240 bw ( KiB/s): min=59392, max=83968, per=6.61%, avg=78481.95, stdev=6149.45, samples=20 00:18:42.240 iops : min= 232, max= 328, avg=306.55, stdev=24.03, samples=20 00:18:42.240 lat (msec) : 50=0.26%, 100=0.54%, 250=92.30%, 500=6.90% 00:18:42.240 cpu : usr=0.71%, sys=0.98%, ctx=2446, majf=0, minf=1 00:18:42.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:42.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.240 issued rwts: total=0,3129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.240 job6: (groupid=0, jobs=1): err= 0: pid=91186: Wed Nov 20 22:38:41 2024 00:18:42.240 write: IOPS=525, BW=131MiB/s (138MB/s)(1328MiB/10115msec); 0 zone resets 00:18:42.240 slat (usec): min=26, max=11145, avg=1877.03, stdev=3209.90 00:18:42.240 clat (msec): min=10, max=238, avg=119.96, stdev=16.61 00:18:42.240 lat (msec): min=10, max=238, avg=121.84, stdev=16.55 00:18:42.240 clat percentiles (msec): 00:18:42.240 | 1.00th=[ 75], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 118], 00:18:42.240 | 30.00th=[ 121], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 127], 00:18:42.240 | 70.00th=[ 128], 80.00th=[ 129], 90.00th=[ 129], 95.00th=[ 130], 00:18:42.240 | 99.00th=[ 136], 99.50th=[ 184], 99.90th=[ 232], 99.95th=[ 232], 00:18:42.240 | 99.99th=[ 239] 00:18:42.241 bw ( KiB/s): min=125700, max=174080, per=11.31%, avg=134323.30, stdev=14211.24, samples=20 00:18:42.241 iops : min= 491, max= 680, avg=524.65, stdev=55.53, samples=20 00:18:42.241 lat (msec) : 20=0.30%, 50=0.38%, 100=14.97%, 250=84.35% 00:18:42.241 cpu : usr=1.67%, sys=1.53%, ctx=4026, majf=0, minf=1 00:18:42.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:42.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.241 issued rwts: total=0,5311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.241 job7: (groupid=0, jobs=1): err= 0: pid=91187: Wed Nov 20 22:38:41 2024 00:18:42.241 write: IOPS=307, BW=76.9MiB/s (80.7MB/s)(783MiB/10172msec); 0 zone resets 00:18:42.241 slat (usec): min=20, max=65229, avg=3191.28, stdev=5711.47 00:18:42.241 clat (msec): min=29, max=358, avg=204.69, stdev=24.71 00:18:42.241 lat (msec): min=29, max=358, avg=207.88, stdev=24.44 00:18:42.241 clat percentiles (msec): 00:18:42.241 | 1.00th=[ 101], 5.00th=[ 
186], 10.00th=[ 190], 20.00th=[ 197], 00:18:42.241 | 30.00th=[ 199], 40.00th=[ 201], 50.00th=[ 203], 60.00th=[ 205], 00:18:42.241 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 224], 95.00th=[ 253], 00:18:42.241 | 99.00th=[ 271], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 359], 00:18:42.241 | 99.99th=[ 359] 00:18:42.241 bw ( KiB/s): min=65536, max=83968, per=6.61%, avg=78500.40, stdev=4721.38, samples=20 00:18:42.241 iops : min= 256, max= 328, avg=306.60, stdev=18.49, samples=20 00:18:42.241 lat (msec) : 50=0.26%, 100=0.64%, 250=93.71%, 500=5.40% 00:18:42.241 cpu : usr=0.87%, sys=1.12%, ctx=4243, majf=0, minf=1 00:18:42.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:42.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.241 issued rwts: total=0,3130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.241 job8: (groupid=0, jobs=1): err= 0: pid=91188: Wed Nov 20 22:38:41 2024 00:18:42.241 write: IOPS=466, BW=117MiB/s (122MB/s)(1181MiB/10112msec); 0 zone resets 00:18:42.241 slat (usec): min=18, max=127807, avg=2074.51, stdev=4179.91 00:18:42.241 clat (msec): min=9, max=336, avg=134.88, stdev=31.63 00:18:42.241 lat (msec): min=9, max=336, avg=136.96, stdev=31.85 00:18:42.241 clat percentiles (msec): 00:18:42.241 | 1.00th=[ 58], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 126], 00:18:42.241 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 132], 00:18:42.241 | 70.00th=[ 133], 80.00th=[ 134], 90.00th=[ 138], 95.00th=[ 182], 00:18:42.241 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 338], 00:18:42.241 | 99.99th=[ 338] 00:18:42.241 bw ( KiB/s): min=47104, max=126976, per=10.04%, avg=119270.25, stdev=17735.58, samples=20 00:18:42.241 iops : min= 184, max= 496, avg=465.85, stdev=69.27, samples=20 00:18:42.241 lat (msec) : 10=0.08%, 50=0.64%, 100=1.36%, 250=94.56%, 500=3.37% 00:18:42.241 cpu : usr=0.79%, sys=1.15%, ctx=5923, majf=0, minf=1 00:18:42.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:42.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.241 issued rwts: total=0,4722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.241 job9: (groupid=0, jobs=1): err= 0: pid=91189: Wed Nov 20 22:38:41 2024 00:18:42.241 write: IOPS=307, BW=76.8MiB/s (80.5MB/s)(781MiB/10169msec); 0 zone resets 00:18:42.241 slat (usec): min=22, max=75272, avg=3197.78, stdev=5908.23 00:18:42.241 clat (msec): min=64, max=357, avg=205.05, stdev=25.31 00:18:42.241 lat (msec): min=64, max=357, avg=208.25, stdev=25.02 00:18:42.241 clat percentiles (msec): 00:18:42.241 | 1.00th=[ 153], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:18:42.241 | 30.00th=[ 199], 40.00th=[ 201], 50.00th=[ 201], 60.00th=[ 201], 00:18:42.241 | 70.00th=[ 203], 80.00th=[ 209], 90.00th=[ 222], 95.00th=[ 271], 00:18:42.241 | 99.00th=[ 288], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 359], 00:18:42.241 | 99.99th=[ 359] 00:18:42.241 bw ( KiB/s): min=57344, max=83968, per=6.60%, avg=78361.60, stdev=7355.10, samples=20 00:18:42.241 iops : min= 224, max= 328, avg=306.10, stdev=28.73, samples=20 00:18:42.241 lat (msec) : 100=0.48%, 250=92.38%, 500=7.14% 00:18:42.241 cpu : usr=0.75%, sys=0.93%, ctx=3995, majf=0, 
minf=1 00:18:42.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:42.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.241 issued rwts: total=0,3124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.241 job10: (groupid=0, jobs=1): err= 0: pid=91190: Wed Nov 20 22:38:41 2024 00:18:42.241 write: IOPS=498, BW=125MiB/s (131MB/s)(1259MiB/10107msec); 0 zone resets 00:18:42.241 slat (usec): min=19, max=10722, avg=1980.65, stdev=3384.56 00:18:42.241 clat (msec): min=13, max=233, avg=126.43, stdev=14.73 00:18:42.241 lat (msec): min=13, max=233, avg=128.41, stdev=14.58 00:18:42.241 clat percentiles (msec): 00:18:42.241 | 1.00th=[ 85], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 124], 00:18:42.241 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 131], 00:18:42.241 | 70.00th=[ 132], 80.00th=[ 134], 90.00th=[ 136], 95.00th=[ 138], 00:18:42.241 | 99.00th=[ 142], 99.50th=[ 184], 99.90th=[ 226], 99.95th=[ 226], 00:18:42.241 | 99.99th=[ 234] 00:18:42.241 bw ( KiB/s): min=120832, max=161469, per=10.72%, avg=127267.05, stdev=9961.44, samples=20 00:18:42.241 iops : min= 472, max= 630, avg=497.10, stdev=38.78, samples=20 00:18:42.241 lat (msec) : 20=0.12%, 50=0.44%, 100=7.83%, 250=91.62% 00:18:42.241 cpu : usr=0.94%, sys=1.41%, ctx=8883, majf=0, minf=1 00:18:42.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:42.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:42.241 issued rwts: total=0,5035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:42.241 00:18:42.241 Run status group 0 (all jobs): 00:18:42.241 WRITE: bw=1160MiB/s (1216MB/s), 76.2MiB/s-151MiB/s (80.0MB/s-159MB/s), io=11.5GiB (12.4GB), run=10102-10172msec 00:18:42.241 00:18:42.241 Disk stats (read/write): 00:18:42.241 nvme0n1: ios=49/12100, merge=0/0, ticks=49/1213449, in_queue=1213498, util=97.77% 00:18:42.241 nvme10n1: ios=49/6076, merge=0/0, ticks=48/1207531, in_queue=1207579, util=97.85% 00:18:42.241 nvme1n1: ios=32/6055, merge=0/0, ticks=38/1206980, in_queue=1207018, util=97.78% 00:18:42.241 nvme2n1: ios=5/10541, merge=0/0, ticks=5/1211124, in_queue=1211129, util=97.73% 00:18:42.241 nvme3n1: ios=0/9971, merge=0/0, ticks=0/1213844, in_queue=1213844, util=98.06% 00:18:42.241 nvme4n1: ios=0/6124, merge=0/0, ticks=0/1206931, in_queue=1206931, util=98.17% 00:18:42.241 nvme5n1: ios=0/10481, merge=0/0, ticks=0/1212429, in_queue=1212429, util=98.36% 00:18:42.241 nvme6n1: ios=0/6116, merge=0/0, ticks=0/1207702, in_queue=1207702, util=98.35% 00:18:42.241 nvme7n1: ios=0/9299, merge=0/0, ticks=0/1212416, in_queue=1212416, util=98.68% 00:18:42.241 nvme8n1: ios=0/6101, merge=0/0, ticks=0/1206871, in_queue=1206871, util=98.69% 00:18:42.241 nvme9n1: ios=0/9918, merge=0/0, ticks=0/1210951, in_queue=1210951, util=98.82% 00:18:42.241 22:38:41 -- target/multiconnection.sh@36 -- # sync 00:18:42.241 22:38:41 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:42.241 22:38:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.241 22:38:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:42.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:42.241 
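The teardown trace that continues below repeats one pattern per subsystem: disconnect the initiator-side controller, poll lsblk until the matching serial (SPDK1..SPDK11) disappears, then delete the target subsystem over RPC. A rough sketch of that loop, assuming the helper behavior implied by the trace (rpc_cmd and NVMF_SUBSYS come from the test environment) rather than the exact script source:

  # disconnect each controller, wait for its namespaces to vanish, drop the subsystem
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
      # waitforserial_disconnect: poll until no block device reports serial SPDK$i
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
          sleep 1
      done
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done
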
22:38:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:42.241 22:38:41 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.241 22:38:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.241 22:38:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:18:42.241 22:38:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:18:42.241 22:38:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.241 22:38:41 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.241 22:38:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.241 22:38:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.241 22:38:41 -- common/autotest_common.sh@10 -- # set +x 00:18:42.241 22:38:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.241 22:38:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.241 22:38:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:42.241 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:42.241 22:38:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:42.241 22:38:42 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.241 22:38:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.241 22:38:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:18:42.241 22:38:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.241 22:38:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:18:42.241 22:38:42 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.241 22:38:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:42.241 22:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.241 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.241 22:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.241 22:38:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.241 22:38:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:42.241 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:42.241 22:38:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:42.241 22:38:42 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.241 22:38:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.241 22:38:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:18:42.241 22:38:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.241 22:38:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:18:42.241 22:38:42 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.241 22:38:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:42.241 22:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.241 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.241 22:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.242 22:38:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.242 22:38:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:42.242 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:42.242 22:38:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:42.242 22:38:42 
-- common/autotest_common.sh@1208 -- # local i=0 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:18:42.242 22:38:42 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.242 22:38:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:42.242 22:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.242 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.242 22:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.242 22:38:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.242 22:38:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:42.242 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:42.242 22:38:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:42.242 22:38:42 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.242 22:38:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:42.242 22:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.242 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.242 22:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.242 22:38:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.242 22:38:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:42.242 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:42.242 22:38:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:42.242 22:38:42 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:18:42.242 22:38:42 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.242 22:38:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:42.242 22:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.242 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.242 22:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.242 22:38:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.242 22:38:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:42.242 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:42.242 22:38:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:42.242 22:38:42 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.242 22:38:42 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:18:42.242 22:38:42 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.242 22:38:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:42.242 22:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.242 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.242 22:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.242 22:38:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.242 22:38:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:42.242 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:42.242 22:38:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:42.242 22:38:42 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.242 22:38:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:42.242 22:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.242 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.242 22:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.242 22:38:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.242 22:38:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:42.242 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:42.242 22:38:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:42.242 22:38:42 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.242 22:38:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:18:42.242 22:38:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.501 22:38:42 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.501 22:38:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:42.501 22:38:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.501 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:42.501 22:38:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.501 22:38:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.501 22:38:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:42.501 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:42.501 22:38:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:42.501 22:38:43 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.501 22:38:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.502 22:38:43 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:18:42.502 22:38:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.502 22:38:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:18:42.502 22:38:43 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.502 22:38:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:42.502 22:38:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.502 22:38:43 -- common/autotest_common.sh@10 -- # set +x 00:18:42.502 22:38:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.502 22:38:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.502 22:38:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:42.761 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:42.761 22:38:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:42.761 22:38:43 -- common/autotest_common.sh@1208 -- # local i=0 00:18:42.761 22:38:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:42.761 22:38:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:18:42.761 22:38:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:42.761 22:38:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:18:42.761 22:38:43 -- common/autotest_common.sh@1220 -- # return 0 00:18:42.761 22:38:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:42.761 22:38:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.761 22:38:43 -- common/autotest_common.sh@10 -- # set +x 00:18:42.761 22:38:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.761 22:38:43 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:42.761 22:38:43 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:42.761 22:38:43 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:42.761 22:38:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:42.761 22:38:43 -- nvmf/common.sh@116 -- # sync 00:18:42.761 22:38:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:42.761 22:38:43 -- nvmf/common.sh@119 -- # set +e 00:18:42.761 22:38:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:42.761 22:38:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:42.761 rmmod nvme_tcp 00:18:42.761 rmmod nvme_fabrics 00:18:42.761 rmmod nvme_keyring 00:18:42.761 22:38:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:42.761 22:38:43 -- nvmf/common.sh@123 -- # set -e 00:18:42.761 22:38:43 -- nvmf/common.sh@124 -- # return 0 00:18:42.761 22:38:43 -- nvmf/common.sh@477 -- # '[' -n 90480 ']' 00:18:42.761 22:38:43 -- nvmf/common.sh@478 -- # killprocess 90480 00:18:42.761 22:38:43 -- common/autotest_common.sh@936 -- # '[' -z 90480 ']' 00:18:42.761 22:38:43 -- common/autotest_common.sh@940 -- # kill -0 90480 00:18:42.761 22:38:43 -- common/autotest_common.sh@941 -- # uname 00:18:42.761 22:38:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:42.761 22:38:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90480 00:18:42.761 22:38:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:42.761 22:38:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:42.761 killing process with pid 90480 00:18:42.761 22:38:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90480' 00:18:42.761 22:38:43 -- 
common/autotest_common.sh@955 -- # kill 90480 00:18:42.761 22:38:43 -- common/autotest_common.sh@960 -- # wait 90480 00:18:43.696 22:38:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:43.696 22:38:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:43.696 22:38:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:43.696 22:38:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.696 22:38:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:43.696 22:38:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.696 22:38:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.696 22:38:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.696 22:38:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:43.696 00:18:43.696 real 0m50.505s 00:18:43.696 user 2m54.676s 00:18:43.696 sys 0m21.670s 00:18:43.696 ************************************ 00:18:43.696 END TEST nvmf_multiconnection 00:18:43.696 ************************************ 00:18:43.696 22:38:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:43.696 22:38:44 -- common/autotest_common.sh@10 -- # set +x 00:18:43.696 22:38:44 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:43.696 22:38:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:43.696 22:38:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:43.696 22:38:44 -- common/autotest_common.sh@10 -- # set +x 00:18:43.696 ************************************ 00:18:43.696 START TEST nvmf_initiator_timeout 00:18:43.696 ************************************ 00:18:43.696 22:38:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:43.696 * Looking for test storage... 00:18:43.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:43.696 22:38:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:43.696 22:38:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:43.696 22:38:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:43.696 22:38:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:43.696 22:38:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:43.696 22:38:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:43.696 22:38:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:43.696 22:38:44 -- scripts/common.sh@335 -- # IFS=.-: 00:18:43.696 22:38:44 -- scripts/common.sh@335 -- # read -ra ver1 00:18:43.696 22:38:44 -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.696 22:38:44 -- scripts/common.sh@336 -- # read -ra ver2 00:18:43.696 22:38:44 -- scripts/common.sh@337 -- # local 'op=<' 00:18:43.696 22:38:44 -- scripts/common.sh@339 -- # ver1_l=2 00:18:43.696 22:38:44 -- scripts/common.sh@340 -- # ver2_l=1 00:18:43.696 22:38:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:43.696 22:38:44 -- scripts/common.sh@343 -- # case "$op" in 00:18:43.696 22:38:44 -- scripts/common.sh@344 -- # : 1 00:18:43.696 22:38:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:43.696 22:38:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.696 22:38:44 -- scripts/common.sh@364 -- # decimal 1 00:18:43.696 22:38:44 -- scripts/common.sh@352 -- # local d=1 00:18:43.696 22:38:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.696 22:38:44 -- scripts/common.sh@354 -- # echo 1 00:18:43.696 22:38:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:43.696 22:38:44 -- scripts/common.sh@365 -- # decimal 2 00:18:43.696 22:38:44 -- scripts/common.sh@352 -- # local d=2 00:18:43.696 22:38:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.696 22:38:44 -- scripts/common.sh@354 -- # echo 2 00:18:43.696 22:38:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:43.696 22:38:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:43.696 22:38:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:43.696 22:38:44 -- scripts/common.sh@367 -- # return 0 00:18:43.696 22:38:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.696 22:38:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.696 --rc genhtml_branch_coverage=1 00:18:43.696 --rc genhtml_function_coverage=1 00:18:43.696 --rc genhtml_legend=1 00:18:43.696 --rc geninfo_all_blocks=1 00:18:43.696 --rc geninfo_unexecuted_blocks=1 00:18:43.696 00:18:43.696 ' 00:18:43.696 22:38:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.696 --rc genhtml_branch_coverage=1 00:18:43.696 --rc genhtml_function_coverage=1 00:18:43.696 --rc genhtml_legend=1 00:18:43.696 --rc geninfo_all_blocks=1 00:18:43.696 --rc geninfo_unexecuted_blocks=1 00:18:43.696 00:18:43.696 ' 00:18:43.696 22:38:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.696 --rc genhtml_branch_coverage=1 00:18:43.696 --rc genhtml_function_coverage=1 00:18:43.696 --rc genhtml_legend=1 00:18:43.696 --rc geninfo_all_blocks=1 00:18:43.696 --rc geninfo_unexecuted_blocks=1 00:18:43.696 00:18:43.696 ' 00:18:43.696 22:38:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:43.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.697 --rc genhtml_branch_coverage=1 00:18:43.697 --rc genhtml_function_coverage=1 00:18:43.697 --rc genhtml_legend=1 00:18:43.697 --rc geninfo_all_blocks=1 00:18:43.697 --rc geninfo_unexecuted_blocks=1 00:18:43.697 00:18:43.697 ' 00:18:43.697 22:38:44 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:43.697 22:38:44 -- nvmf/common.sh@7 -- # uname -s 00:18:43.697 22:38:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.697 22:38:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.697 22:38:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.697 22:38:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.697 22:38:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.697 22:38:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.697 22:38:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.697 22:38:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.697 22:38:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.697 22:38:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.697 22:38:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
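The cmp_versions xtrace above is scripts/common.sh deciding whether the installed lcov is older than 2 before picking coverage flags; stripped of tracing, it is a field-by-field numeric compare of dot-separated versions. A condensed sketch under the assumption of purely numeric fields (the real helper also normalizes non-numeric components through its decimal helper):

  # returns success if "$1 $2 $3" holds, e.g. cmp_versions 1.15 '<' 2
  cmp_versions() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      local op=$2 v a b
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          if ((a != b)); then
              case $op in
                  '<') (( a < b )) && return 0 || return 1 ;;
                  '>') (( a > b )) && return 0 || return 1 ;;
              esac
          fi
      done
      return 1   # all fields equal, so neither strictly < nor >
  }
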
00:18:43.697 22:38:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:18:43.697 22:38:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.697 22:38:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.697 22:38:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:43.697 22:38:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.697 22:38:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.697 22:38:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.697 22:38:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.697 22:38:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.697 22:38:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.697 22:38:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.697 22:38:44 -- paths/export.sh@5 -- # export PATH 00:18:43.697 22:38:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.697 22:38:44 -- nvmf/common.sh@46 -- # : 0 00:18:43.697 22:38:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:43.697 22:38:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:43.697 22:38:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:43.697 22:38:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.697 22:38:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.697 22:38:44 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:43.697 22:38:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:43.697 22:38:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:43.697 22:38:44 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.697 22:38:44 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.697 22:38:44 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:43.697 22:38:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:43.697 22:38:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.697 22:38:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:43.697 22:38:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:43.697 22:38:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:43.697 22:38:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.697 22:38:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.697 22:38:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.697 22:38:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:43.697 22:38:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:43.697 22:38:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:43.697 22:38:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:43.697 22:38:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:43.697 22:38:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:43.697 22:38:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.697 22:38:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.697 22:38:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:43.697 22:38:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:43.697 22:38:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:43.697 22:38:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:43.697 22:38:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:43.697 22:38:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.697 22:38:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:43.697 22:38:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:43.697 22:38:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:43.697 22:38:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:43.697 22:38:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:43.697 22:38:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:43.697 Cannot find device "nvmf_tgt_br" 00:18:43.697 22:38:44 -- nvmf/common.sh@154 -- # true 00:18:43.697 22:38:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.697 Cannot find device "nvmf_tgt_br2" 00:18:43.697 22:38:44 -- nvmf/common.sh@155 -- # true 00:18:43.697 22:38:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:43.697 22:38:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:43.697 Cannot find device "nvmf_tgt_br" 00:18:43.697 22:38:44 -- nvmf/common.sh@157 -- # true 00:18:43.697 22:38:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:43.697 Cannot find device "nvmf_tgt_br2" 00:18:43.956 22:38:44 -- nvmf/common.sh@158 -- # true 00:18:43.956 22:38:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:43.956 22:38:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:43.956 22:38:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:43.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.956 22:38:44 -- nvmf/common.sh@161 -- # true 00:18:43.956 22:38:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.956 22:38:44 -- nvmf/common.sh@162 -- # true 00:18:43.956 22:38:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:43.956 22:38:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:43.956 22:38:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:43.956 22:38:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:43.956 22:38:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:43.956 22:38:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:43.956 22:38:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:43.956 22:38:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:43.956 22:38:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:43.956 22:38:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:43.956 22:38:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:43.956 22:38:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:43.956 22:38:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:43.956 22:38:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.956 22:38:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.956 22:38:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.956 22:38:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:43.956 22:38:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:43.956 22:38:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.956 22:38:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.956 22:38:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.956 22:38:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.956 22:38:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.956 22:38:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:43.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:18:43.956 00:18:43.956 --- 10.0.0.2 ping statistics --- 00:18:43.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.956 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:43.956 22:38:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:43.956 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:43.956 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:18:43.956 00:18:43.956 --- 10.0.0.3 ping statistics --- 00:18:43.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.956 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:43.956 22:38:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:43.956 00:18:43.956 --- 10.0.0.1 ping statistics --- 00:18:43.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.956 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:43.956 22:38:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.956 22:38:44 -- nvmf/common.sh@421 -- # return 0 00:18:43.956 22:38:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:43.956 22:38:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.956 22:38:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:43.956 22:38:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:43.956 22:38:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.956 22:38:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:43.956 22:38:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:44.214 22:38:44 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:44.214 22:38:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:44.214 22:38:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.214 22:38:44 -- common/autotest_common.sh@10 -- # set +x 00:18:44.214 22:38:44 -- nvmf/common.sh@469 -- # nvmfpid=91568 00:18:44.214 22:38:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:44.214 22:38:44 -- nvmf/common.sh@470 -- # waitforlisten 91568 00:18:44.214 22:38:44 -- common/autotest_common.sh@829 -- # '[' -z 91568 ']' 00:18:44.214 22:38:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.214 22:38:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.214 22:38:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.214 22:38:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.214 22:38:44 -- common/autotest_common.sh@10 -- # set +x 00:18:44.214 [2024-11-20 22:38:44.754794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:44.214 [2024-11-20 22:38:44.754884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.214 [2024-11-20 22:38:44.890634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.473 [2024-11-20 22:38:44.961367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:44.473 [2024-11-20 22:38:44.961541] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.473 [2024-11-20 22:38:44.961556] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.473 [2024-11-20 22:38:44.961564] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
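For reference, the network plumbing that nvmf_veth_init set up just before the target started reduces to the following, condensed from the trace (interface names and addresses exactly as logged; the individual "ip link set ... up" steps are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow in the trace confirm 10.0.0.2 and 10.0.0.3 are reachable from the initiator side and 10.0.0.1 from inside the namespace before the target is launched.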
00:18:44.473 [2024-11-20 22:38:44.961720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.473 [2024-11-20 22:38:44.962400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.473 [2024-11-20 22:38:44.962484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.473 [2024-11-20 22:38:44.962490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.040 22:38:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.040 22:38:45 -- common/autotest_common.sh@862 -- # return 0 00:18:45.040 22:38:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:45.040 22:38:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.040 22:38:45 -- common/autotest_common.sh@10 -- # set +x 00:18:45.040 22:38:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.040 22:38:45 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:45.040 22:38:45 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.040 22:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.040 22:38:45 -- common/autotest_common.sh@10 -- # set +x 00:18:45.299 Malloc0 00:18:45.299 22:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.299 22:38:45 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:45.299 22:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.299 22:38:45 -- common/autotest_common.sh@10 -- # set +x 00:18:45.299 Delay0 00:18:45.299 22:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.299 22:38:45 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.299 22:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.299 22:38:45 -- common/autotest_common.sh@10 -- # set +x 00:18:45.299 [2024-11-20 22:38:45.810959] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.299 22:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.299 22:38:45 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:45.299 22:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.299 22:38:45 -- common/autotest_common.sh@10 -- # set +x 00:18:45.299 22:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.299 22:38:45 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:45.299 22:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.299 22:38:45 -- common/autotest_common.sh@10 -- # set +x 00:18:45.299 22:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.299 22:38:45 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.299 22:38:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.299 22:38:45 -- common/autotest_common.sh@10 -- # set +x 00:18:45.299 [2024-11-20 22:38:45.839172] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.299 22:38:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.299 22:38:45 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:45.299 22:38:46 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:45.299 22:38:46 -- common/autotest_common.sh@1187 -- # local i=0 00:18:45.299 22:38:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.299 22:38:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:45.299 22:38:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:47.831 22:38:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:47.831 22:38:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:47.831 22:38:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:47.831 22:38:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:47.831 22:38:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.831 22:38:48 -- common/autotest_common.sh@1197 -- # return 0 00:18:47.831 22:38:48 -- target/initiator_timeout.sh@35 -- # fio_pid=91649 00:18:47.831 22:38:48 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:47.831 22:38:48 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:47.831 [global] 00:18:47.831 thread=1 00:18:47.831 invalidate=1 00:18:47.831 rw=write 00:18:47.831 time_based=1 00:18:47.831 runtime=60 00:18:47.831 ioengine=libaio 00:18:47.831 direct=1 00:18:47.831 bs=4096 00:18:47.831 iodepth=1 00:18:47.831 norandommap=0 00:18:47.831 numjobs=1 00:18:47.831 00:18:47.831 verify_dump=1 00:18:47.831 verify_backlog=512 00:18:47.831 verify_state_save=0 00:18:47.831 do_verify=1 00:18:47.831 verify=crc32c-intel 00:18:47.831 [job0] 00:18:47.831 filename=/dev/nvme0n1 00:18:47.831 Could not set queue depth (nvme0n1) 00:18:47.831 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.831 fio-3.35 00:18:47.831 Starting 1 thread 00:18:50.365 22:38:51 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:50.365 22:38:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.365 22:38:51 -- common/autotest_common.sh@10 -- # set +x 00:18:50.365 true 00:18:50.365 22:38:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.365 22:38:51 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:50.365 22:38:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.365 22:38:51 -- common/autotest_common.sh@10 -- # set +x 00:18:50.365 true 00:18:50.365 22:38:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.365 22:38:51 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:50.365 22:38:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.365 22:38:51 -- common/autotest_common.sh@10 -- # set +x 00:18:50.365 true 00:18:50.365 22:38:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.365 22:38:51 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:50.365 22:38:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.365 22:38:51 -- common/autotest_common.sh@10 -- # set +x 00:18:50.365 true 00:18:50.365 22:38:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.365 22:38:51 -- 
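In outline, what the initiator_timeout test is exercising here: a Malloc bdev is wrapped in a Delay0 bdev with small (30 microsecond) latencies, exported over NVMe/TCP, and a 60-second fio write job is started against the connected namespace; mid-run the delay latencies are raised to 31000000 (roughly 31 seconds, since the delay bdev RPC takes microseconds) for a few seconds and then dropped back, and the test expects fio to ride out the stall rather than the initiator timing out. A condensed sketch of the RPC sequence, using only commands and values that appear in the trace:

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # baseline latencies
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... nvme connect + 60 s fio write job started, then mid-run:
  for lat in avg_read avg_write p99_read; do
      rpc_cmd bdev_delay_update_latency Delay0 "$lat" 31000000   # ~31 s stall
  done
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000   # value as logged
  sleep 3
  for lat in avg_read avg_write p99_read p99_write; do
      rpc_cmd bdev_delay_update_latency Delay0 "$lat" 30          # restore baseline
  done

The "nvmf hotplug test: fio successful as expected" message later in the log is the check that the job completed despite the injected latency spike.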
target/initiator_timeout.sh@45 -- # sleep 3 00:18:53.651 22:38:54 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:53.651 22:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.651 22:38:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.651 true 00:18:53.651 22:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.651 22:38:54 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:53.651 22:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.651 22:38:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.651 true 00:18:53.651 22:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.651 22:38:54 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:53.651 22:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.651 22:38:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.651 true 00:18:53.651 22:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.651 22:38:54 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:53.651 22:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.651 22:38:54 -- common/autotest_common.sh@10 -- # set +x 00:18:53.651 true 00:18:53.651 22:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.651 22:38:54 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:53.651 22:38:54 -- target/initiator_timeout.sh@54 -- # wait 91649 00:19:49.877 00:19:49.877 job0: (groupid=0, jobs=1): err= 0: pid=91671: Wed Nov 20 22:39:48 2024 00:19:49.877 read: IOPS=822, BW=3290KiB/s (3369kB/s)(193MiB/60000msec) 00:19:49.877 slat (usec): min=10, max=9192, avg=13.10, stdev=55.38 00:19:49.877 clat (usec): min=3, max=40719k, avg=1022.44, stdev=183283.85 00:19:49.877 lat (usec): min=163, max=40719k, avg=1035.53, stdev=183283.86 00:19:49.877 clat percentiles (usec): 00:19:49.877 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:19:49.877 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:19:49.877 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:19:49.877 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 310], 99.95th=[ 519], 00:19:49.877 | 99.99th=[ 963] 00:19:49.877 write: IOPS=827, BW=3311KiB/s (3390kB/s)(194MiB/60000msec); 0 zone resets 00:19:49.877 slat (usec): min=16, max=1571, avg=19.39, stdev= 9.24 00:19:49.877 clat (usec): min=120, max=556, avg=156.80, stdev=16.31 00:19:49.877 lat (usec): min=137, max=1891, avg=176.18, stdev=19.50 00:19:49.877 clat percentiles (usec): 00:19:49.877 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:19:49.877 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:19:49.877 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 186], 00:19:49.877 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 247], 99.95th=[ 277], 00:19:49.877 | 99.99th=[ 523] 00:19:49.877 bw ( KiB/s): min= 1336, max=12288, per=100.00%, avg=9955.08, stdev=1977.63, samples=39 00:19:49.877 iops : min= 334, max= 3072, avg=2488.77, stdev=494.41, samples=39 00:19:49.877 lat (usec) : 4=0.01%, 20=0.01%, 250=99.42%, 500=0.55%, 750=0.02% 00:19:49.877 lat (usec) : 1000=0.01% 00:19:49.877 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:19:49.877 cpu : usr=0.48%, sys=2.03%, ctx=99072, majf=0, minf=5 00:19:49.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:49.877 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.877 issued rwts: total=49355,49664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:49.877 00:19:49.877 Run status group 0 (all jobs): 00:19:49.877 READ: bw=3290KiB/s (3369kB/s), 3290KiB/s-3290KiB/s (3369kB/s-3369kB/s), io=193MiB (202MB), run=60000-60000msec 00:19:49.877 WRITE: bw=3311KiB/s (3390kB/s), 3311KiB/s-3311KiB/s (3390kB/s-3390kB/s), io=194MiB (203MB), run=60000-60000msec 00:19:49.877 00:19:49.877 Disk stats (read/write): 00:19:49.877 nvme0n1: ios=49488/49329, merge=0/0, ticks=10050/8180, in_queue=18230, util=99.77% 00:19:49.877 22:39:48 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:49.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:49.877 22:39:48 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:49.877 22:39:48 -- common/autotest_common.sh@1208 -- # local i=0 00:19:49.877 22:39:48 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:49.877 22:39:48 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:49.877 22:39:48 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:49.877 22:39:48 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:49.877 22:39:48 -- common/autotest_common.sh@1220 -- # return 0 00:19:49.877 22:39:48 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:49.877 22:39:48 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:49.877 nvmf hotplug test: fio successful as expected 00:19:49.877 22:39:48 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.877 22:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.877 22:39:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.877 22:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.877 22:39:48 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:49.877 22:39:48 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:49.877 22:39:48 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:49.877 22:39:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:49.877 22:39:48 -- nvmf/common.sh@116 -- # sync 00:19:49.877 22:39:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:49.877 22:39:48 -- nvmf/common.sh@119 -- # set +e 00:19:49.877 22:39:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:49.877 22:39:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:49.877 rmmod nvme_tcp 00:19:49.877 rmmod nvme_fabrics 00:19:49.877 rmmod nvme_keyring 00:19:49.877 22:39:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:49.877 22:39:48 -- nvmf/common.sh@123 -- # set -e 00:19:49.877 22:39:48 -- nvmf/common.sh@124 -- # return 0 00:19:49.877 22:39:48 -- nvmf/common.sh@477 -- # '[' -n 91568 ']' 00:19:49.877 22:39:48 -- nvmf/common.sh@478 -- # killprocess 91568 00:19:49.877 22:39:48 -- common/autotest_common.sh@936 -- # '[' -z 91568 ']' 00:19:49.877 22:39:48 -- common/autotest_common.sh@940 -- # kill -0 91568 00:19:49.877 22:39:48 -- common/autotest_common.sh@941 -- # uname 00:19:49.877 22:39:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:49.877 22:39:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91568 
00:19:49.877 killing process with pid 91568 00:19:49.878 22:39:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:49.878 22:39:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:49.878 22:39:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91568' 00:19:49.878 22:39:48 -- common/autotest_common.sh@955 -- # kill 91568 00:19:49.878 22:39:48 -- common/autotest_common.sh@960 -- # wait 91568 00:19:49.878 22:39:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:49.878 22:39:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:49.878 22:39:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:49.878 22:39:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.878 22:39:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:49.878 22:39:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.878 22:39:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.878 22:39:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.878 22:39:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:49.878 ************************************ 00:19:49.878 END TEST nvmf_initiator_timeout 00:19:49.878 ************************************ 00:19:49.878 00:19:49.878 real 1m4.690s 00:19:49.878 user 4m7.886s 00:19:49.878 sys 0m7.433s 00:19:49.878 22:39:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:49.878 22:39:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.878 22:39:48 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:49.878 22:39:48 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:49.878 22:39:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.878 22:39:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.878 22:39:48 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:49.878 22:39:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.878 22:39:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.878 22:39:48 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:49.878 22:39:48 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:49.878 22:39:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:49.878 22:39:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:49.878 22:39:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.878 ************************************ 00:19:49.878 START TEST nvmf_multicontroller 00:19:49.878 ************************************ 00:19:49.878 22:39:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:49.878 * Looking for test storage... 
00:19:49.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:49.878 22:39:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:49.878 22:39:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:49.878 22:39:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:49.878 22:39:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:49.878 22:39:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:49.878 22:39:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:49.878 22:39:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:49.878 22:39:49 -- scripts/common.sh@335 -- # IFS=.-: 00:19:49.878 22:39:49 -- scripts/common.sh@335 -- # read -ra ver1 00:19:49.878 22:39:49 -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.878 22:39:49 -- scripts/common.sh@336 -- # read -ra ver2 00:19:49.878 22:39:49 -- scripts/common.sh@337 -- # local 'op=<' 00:19:49.878 22:39:49 -- scripts/common.sh@339 -- # ver1_l=2 00:19:49.878 22:39:49 -- scripts/common.sh@340 -- # ver2_l=1 00:19:49.878 22:39:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:49.878 22:39:49 -- scripts/common.sh@343 -- # case "$op" in 00:19:49.878 22:39:49 -- scripts/common.sh@344 -- # : 1 00:19:49.878 22:39:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:49.878 22:39:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:49.878 22:39:49 -- scripts/common.sh@364 -- # decimal 1 00:19:49.878 22:39:49 -- scripts/common.sh@352 -- # local d=1 00:19:49.878 22:39:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.878 22:39:49 -- scripts/common.sh@354 -- # echo 1 00:19:49.878 22:39:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:49.878 22:39:49 -- scripts/common.sh@365 -- # decimal 2 00:19:49.878 22:39:49 -- scripts/common.sh@352 -- # local d=2 00:19:49.878 22:39:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.878 22:39:49 -- scripts/common.sh@354 -- # echo 2 00:19:49.878 22:39:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:49.878 22:39:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:49.878 22:39:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:49.878 22:39:49 -- scripts/common.sh@367 -- # return 0 00:19:49.878 22:39:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.878 22:39:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:49.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.878 --rc genhtml_branch_coverage=1 00:19:49.878 --rc genhtml_function_coverage=1 00:19:49.878 --rc genhtml_legend=1 00:19:49.878 --rc geninfo_all_blocks=1 00:19:49.878 --rc geninfo_unexecuted_blocks=1 00:19:49.878 00:19:49.878 ' 00:19:49.878 22:39:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:49.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.878 --rc genhtml_branch_coverage=1 00:19:49.878 --rc genhtml_function_coverage=1 00:19:49.878 --rc genhtml_legend=1 00:19:49.878 --rc geninfo_all_blocks=1 00:19:49.878 --rc geninfo_unexecuted_blocks=1 00:19:49.878 00:19:49.878 ' 00:19:49.878 22:39:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:49.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.878 --rc genhtml_branch_coverage=1 00:19:49.878 --rc genhtml_function_coverage=1 00:19:49.878 --rc genhtml_legend=1 00:19:49.878 --rc geninfo_all_blocks=1 00:19:49.878 --rc geninfo_unexecuted_blocks=1 00:19:49.878 00:19:49.878 ' 00:19:49.878 
22:39:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:49.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.878 --rc genhtml_branch_coverage=1 00:19:49.878 --rc genhtml_function_coverage=1 00:19:49.878 --rc genhtml_legend=1 00:19:49.878 --rc geninfo_all_blocks=1 00:19:49.878 --rc geninfo_unexecuted_blocks=1 00:19:49.878 00:19:49.878 ' 00:19:49.878 22:39:49 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.878 22:39:49 -- nvmf/common.sh@7 -- # uname -s 00:19:49.878 22:39:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.878 22:39:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.878 22:39:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.878 22:39:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.878 22:39:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.878 22:39:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.878 22:39:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.878 22:39:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.878 22:39:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.878 22:39:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.878 22:39:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:49.878 22:39:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:49.878 22:39:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.878 22:39:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.878 22:39:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:49.878 22:39:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.878 22:39:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.878 22:39:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.878 22:39:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.878 22:39:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.878 22:39:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.878 22:39:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.878 22:39:49 -- paths/export.sh@5 -- # export PATH 00:19:49.878 22:39:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.878 22:39:49 -- nvmf/common.sh@46 -- # : 0 00:19:49.878 22:39:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:49.878 22:39:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:49.878 22:39:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:49.878 22:39:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.878 22:39:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.878 22:39:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:49.878 22:39:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:49.878 22:39:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:49.878 22:39:49 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:49.878 22:39:49 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:49.878 22:39:49 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:49.878 22:39:49 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:49.878 22:39:49 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.878 22:39:49 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:49.878 22:39:49 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:49.878 22:39:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:49.879 22:39:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.879 22:39:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:49.879 22:39:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:49.879 22:39:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:49.879 22:39:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.879 22:39:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.879 22:39:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.879 22:39:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:49.879 22:39:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:49.879 22:39:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:49.879 22:39:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:49.879 22:39:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:49.879 22:39:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:49.879 22:39:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.879 22:39:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:19:49.879 22:39:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:49.879 22:39:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:49.879 22:39:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:49.879 22:39:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:49.879 22:39:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:49.879 22:39:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.879 22:39:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:49.879 22:39:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:49.879 22:39:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:49.879 22:39:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:49.879 22:39:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:49.879 22:39:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:49.879 Cannot find device "nvmf_tgt_br" 00:19:49.879 22:39:49 -- nvmf/common.sh@154 -- # true 00:19:49.879 22:39:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:49.879 Cannot find device "nvmf_tgt_br2" 00:19:49.879 22:39:49 -- nvmf/common.sh@155 -- # true 00:19:49.879 22:39:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:49.879 22:39:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:49.879 Cannot find device "nvmf_tgt_br" 00:19:49.879 22:39:49 -- nvmf/common.sh@157 -- # true 00:19:49.879 22:39:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:49.879 Cannot find device "nvmf_tgt_br2" 00:19:49.879 22:39:49 -- nvmf/common.sh@158 -- # true 00:19:49.879 22:39:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:49.879 22:39:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:49.879 22:39:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:49.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.879 22:39:49 -- nvmf/common.sh@161 -- # true 00:19:49.879 22:39:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:49.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.879 22:39:49 -- nvmf/common.sh@162 -- # true 00:19:49.879 22:39:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:49.879 22:39:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:49.879 22:39:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:49.879 22:39:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:49.879 22:39:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:49.879 22:39:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:49.879 22:39:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:49.879 22:39:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:49.879 22:39:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:49.879 22:39:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:49.879 22:39:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:49.879 22:39:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:19:49.879 22:39:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:49.879 22:39:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:49.879 22:39:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:49.879 22:39:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:49.879 22:39:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:49.879 22:39:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:49.879 22:39:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.879 22:39:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.879 22:39:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.879 22:39:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:49.879 22:39:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.879 22:39:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:49.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:19:49.879 00:19:49.879 --- 10.0.0.2 ping statistics --- 00:19:49.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.879 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:49.879 22:39:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:49.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:49.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:19:49.879 00:19:49.879 --- 10.0.0.3 ping statistics --- 00:19:49.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.879 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:49.879 22:39:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:49.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:19:49.879 00:19:49.879 --- 10.0.0.1 ping statistics --- 00:19:49.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.879 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:49.879 22:39:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.879 22:39:49 -- nvmf/common.sh@421 -- # return 0 00:19:49.879 22:39:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.879 22:39:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.879 22:39:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:49.879 22:39:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:49.879 22:39:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.879 22:39:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:49.879 22:39:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:49.879 22:39:49 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:49.879 22:39:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.879 22:39:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.879 22:39:49 -- common/autotest_common.sh@10 -- # set +x 00:19:49.879 22:39:49 -- nvmf/common.sh@469 -- # nvmfpid=92513 00:19:49.879 22:39:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:49.879 22:39:49 -- nvmf/common.sh@470 -- # waitforlisten 92513 00:19:49.879 22:39:49 -- common/autotest_common.sh@829 -- # '[' -z 92513 ']' 00:19:49.879 22:39:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.879 22:39:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.879 22:39:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.879 22:39:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.879 22:39:49 -- common/autotest_common.sh@10 -- # set +x 00:19:49.879 [2024-11-20 22:39:49.545218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:49.879 [2024-11-20 22:39:49.545340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.879 [2024-11-20 22:39:49.683698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:49.879 [2024-11-20 22:39:49.750886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.879 [2024-11-20 22:39:49.751032] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.879 [2024-11-20 22:39:49.751045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.879 [2024-11-20 22:39:49.751053] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.879 [2024-11-20 22:39:49.751330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.879 [2024-11-20 22:39:49.751329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.879 [2024-11-20 22:39:49.751931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.879 22:39:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.879 22:39:50 -- common/autotest_common.sh@862 -- # return 0 00:19:49.879 22:39:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.879 22:39:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.879 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.879 22:39:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.879 22:39:50 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:49.879 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.879 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.879 [2024-11-20 22:39:50.492192] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.879 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.879 22:39:50 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:49.879 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.879 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.879 Malloc0 00:19:49.879 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.879 22:39:50 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:49.879 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.879 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.879 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.879 22:39:50 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:49.879 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.879 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.880 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.880 22:39:50 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.880 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.880 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.880 [2024-11-20 22:39:50.561709] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.880 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.880 22:39:50 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:49.880 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.880 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.880 [2024-11-20 22:39:50.569608] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:49.880 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.880 22:39:50 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:49.880 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.880 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:49.880 Malloc1 00:19:49.880 22:39:50 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.880 22:39:50 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:49.880 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.880 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:50.138 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.138 22:39:50 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:50.138 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.138 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:50.138 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.138 22:39:50 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:50.138 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.138 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:50.138 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.138 22:39:50 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:50.138 22:39:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.138 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:50.138 22:39:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.138 22:39:50 -- host/multicontroller.sh@44 -- # bdevperf_pid=92565 00:19:50.138 22:39:50 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:50.138 22:39:50 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.138 22:39:50 -- host/multicontroller.sh@47 -- # waitforlisten 92565 /var/tmp/bdevperf.sock 00:19:50.138 22:39:50 -- common/autotest_common.sh@829 -- # '[' -z 92565 ']' 00:19:50.138 22:39:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.138 22:39:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.138 22:39:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:50.138 22:39:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.138 22:39:50 -- common/autotest_common.sh@10 -- # set +x 00:19:51.073 22:39:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.073 22:39:51 -- common/autotest_common.sh@862 -- # return 0 00:19:51.073 22:39:51 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:51.073 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.073 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.073 NVMe0n1 00:19:51.073 22:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.073 22:39:51 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:51.073 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.073 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.073 22:39:51 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:51.073 22:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.073 1 00:19:51.073 22:39:51 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:51.073 22:39:51 -- common/autotest_common.sh@650 -- # local es=0 00:19:51.073 22:39:51 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:51.073 22:39:51 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:51.073 22:39:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.073 22:39:51 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:51.073 22:39:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.074 22:39:51 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:51.074 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.074 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.074 2024/11/20 22:39:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:51.074 request: 00:19:51.074 { 00:19:51.074 "method": "bdev_nvme_attach_controller", 00:19:51.074 "params": { 00:19:51.074 "name": "NVMe0", 00:19:51.074 "trtype": "tcp", 00:19:51.074 "traddr": "10.0.0.2", 00:19:51.074 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:51.074 "hostaddr": "10.0.0.2", 00:19:51.074 "hostsvcid": "60000", 00:19:51.074 "adrfam": "ipv4", 00:19:51.074 "trsvcid": "4420", 00:19:51.074 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:51.074 } 00:19:51.074 } 00:19:51.074 Got JSON-RPC error response 00:19:51.074 GoRPCClient: error on JSON-RPC call 00:19:51.074 22:39:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:51.074 22:39:51 -- 
common/autotest_common.sh@653 -- # es=1 00:19:51.074 22:39:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.074 22:39:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.074 22:39:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.074 22:39:51 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:51.074 22:39:51 -- common/autotest_common.sh@650 -- # local es=0 00:19:51.074 22:39:51 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:51.074 22:39:51 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:51.074 22:39:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.074 22:39:51 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:51.074 22:39:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.074 22:39:51 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:51.074 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.074 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.074 2024/11/20 22:39:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:51.074 request: 00:19:51.074 { 00:19:51.074 "method": "bdev_nvme_attach_controller", 00:19:51.074 "params": { 00:19:51.074 "name": "NVMe0", 00:19:51.074 "trtype": "tcp", 00:19:51.074 "traddr": "10.0.0.2", 00:19:51.074 "hostaddr": "10.0.0.2", 00:19:51.074 "hostsvcid": "60000", 00:19:51.074 "adrfam": "ipv4", 00:19:51.074 "trsvcid": "4420", 00:19:51.074 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:51.074 } 00:19:51.074 } 00:19:51.074 Got JSON-RPC error response 00:19:51.074 GoRPCClient: error on JSON-RPC call 00:19:51.074 22:39:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:51.074 22:39:51 -- common/autotest_common.sh@653 -- # es=1 00:19:51.074 22:39:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.074 22:39:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.074 22:39:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.074 22:39:51 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:51.074 22:39:51 -- common/autotest_common.sh@650 -- # local es=0 00:19:51.074 22:39:51 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:51.074 22:39:51 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:51.074 22:39:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.074 22:39:51 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:51.074 22:39:51 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.074 22:39:51 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:51.074 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.074 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.074 2024/11/20 22:39:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:51.074 request: 00:19:51.074 { 00:19:51.074 "method": "bdev_nvme_attach_controller", 00:19:51.074 "params": { 00:19:51.074 "name": "NVMe0", 00:19:51.074 "trtype": "tcp", 00:19:51.074 "traddr": "10.0.0.2", 00:19:51.074 "hostaddr": "10.0.0.2", 00:19:51.074 "hostsvcid": "60000", 00:19:51.074 "adrfam": "ipv4", 00:19:51.074 "trsvcid": "4420", 00:19:51.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.074 "multipath": "disable" 00:19:51.074 } 00:19:51.074 } 00:19:51.074 Got JSON-RPC error response 00:19:51.074 GoRPCClient: error on JSON-RPC call 00:19:51.074 22:39:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:51.074 22:39:51 -- common/autotest_common.sh@653 -- # es=1 00:19:51.074 22:39:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.074 22:39:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.074 22:39:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.074 22:39:51 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:51.074 22:39:51 -- common/autotest_common.sh@650 -- # local es=0 00:19:51.074 22:39:51 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:51.074 22:39:51 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:51.074 22:39:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.074 22:39:51 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:51.074 22:39:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.074 22:39:51 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:51.074 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.074 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.074 2024/11/20 22:39:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:51.074 request: 00:19:51.074 { 00:19:51.074 "method": "bdev_nvme_attach_controller", 00:19:51.074 "params": { 00:19:51.074 "name": "NVMe0", 
00:19:51.074 "trtype": "tcp", 00:19:51.074 "traddr": "10.0.0.2", 00:19:51.074 "hostaddr": "10.0.0.2", 00:19:51.074 "hostsvcid": "60000", 00:19:51.074 "adrfam": "ipv4", 00:19:51.074 "trsvcid": "4420", 00:19:51.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.074 "multipath": "failover" 00:19:51.074 } 00:19:51.074 } 00:19:51.074 Got JSON-RPC error response 00:19:51.074 GoRPCClient: error on JSON-RPC call 00:19:51.074 22:39:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:51.074 22:39:51 -- common/autotest_common.sh@653 -- # es=1 00:19:51.074 22:39:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.074 22:39:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.074 22:39:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.074 22:39:51 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:51.074 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.074 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.333 00:19:51.333 22:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.333 22:39:51 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:51.333 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.333 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.333 22:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.333 22:39:51 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:51.333 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.333 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.333 00:19:51.333 22:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.333 22:39:51 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:51.333 22:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.333 22:39:51 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:51.333 22:39:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.333 22:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.333 22:39:51 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:51.333 22:39:51 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:52.709 0 00:19:52.709 22:39:53 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:52.709 22:39:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.709 22:39:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.709 22:39:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.709 22:39:53 -- host/multicontroller.sh@100 -- # killprocess 92565 00:19:52.709 22:39:53 -- common/autotest_common.sh@936 -- # '[' -z 92565 ']' 00:19:52.709 22:39:53 -- common/autotest_common.sh@940 -- # kill -0 92565 00:19:52.709 22:39:53 -- common/autotest_common.sh@941 -- # uname 00:19:52.709 22:39:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.709 22:39:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92565 00:19:52.709 killing process with pid 92565 00:19:52.709 
22:39:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:52.709 22:39:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:52.709 22:39:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92565' 00:19:52.709 22:39:53 -- common/autotest_common.sh@955 -- # kill 92565 00:19:52.709 22:39:53 -- common/autotest_common.sh@960 -- # wait 92565 00:19:52.709 22:39:53 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.709 22:39:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.709 22:39:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.709 22:39:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.709 22:39:53 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:52.709 22:39:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.709 22:39:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.709 22:39:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.709 22:39:53 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:52.709 22:39:53 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:52.709 22:39:53 -- common/autotest_common.sh@1607 -- # read -r file 00:19:52.969 22:39:53 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:52.969 22:39:53 -- common/autotest_common.sh@1606 -- # sort -u 00:19:52.969 22:39:53 -- common/autotest_common.sh@1608 -- # cat 00:19:52.969 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:52.969 [2024-11-20 22:39:50.688852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:52.969 [2024-11-20 22:39:50.688950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92565 ] 00:19:52.969 [2024-11-20 22:39:50.829967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.969 [2024-11-20 22:39:50.912462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.969 [2024-11-20 22:39:51.933216] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 43768e1d-6da5-40d0-bfb5-cd2b82f66a90 already exists 00:19:52.969 [2024-11-20 22:39:51.933269] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:43768e1d-6da5-40d0-bfb5-cd2b82f66a90 alias for bdev NVMe1n1 00:19:52.969 [2024-11-20 22:39:51.933299] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:52.969 Running I/O for 1 seconds... 
00:19:52.969 00:19:52.969 Latency(us) 00:19:52.969 [2024-11-20T22:39:53.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.969 [2024-11-20T22:39:53.703Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:52.969 NVMe0n1 : 1.00 23274.49 90.92 0.00 0.00 5492.71 3083.17 13762.56 00:19:52.969 [2024-11-20T22:39:53.703Z] =================================================================================================================== 00:19:52.969 [2024-11-20T22:39:53.703Z] Total : 23274.49 90.92 0.00 0.00 5492.71 3083.17 13762.56 00:19:52.969 Received shutdown signal, test time was about 1.000000 seconds 00:19:52.969 00:19:52.969 Latency(us) 00:19:52.969 [2024-11-20T22:39:53.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.969 [2024-11-20T22:39:53.703Z] =================================================================================================================== 00:19:52.969 [2024-11-20T22:39:53.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.969 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:52.969 22:39:53 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:52.969 22:39:53 -- common/autotest_common.sh@1607 -- # read -r file 00:19:52.969 22:39:53 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:52.969 22:39:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.969 22:39:53 -- nvmf/common.sh@116 -- # sync 00:19:52.969 22:39:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:52.969 22:39:53 -- nvmf/common.sh@119 -- # set +e 00:19:52.969 22:39:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:52.969 22:39:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:52.969 rmmod nvme_tcp 00:19:52.969 rmmod nvme_fabrics 00:19:52.969 rmmod nvme_keyring 00:19:52.969 22:39:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:52.969 22:39:53 -- nvmf/common.sh@123 -- # set -e 00:19:52.969 22:39:53 -- nvmf/common.sh@124 -- # return 0 00:19:52.969 22:39:53 -- nvmf/common.sh@477 -- # '[' -n 92513 ']' 00:19:52.969 22:39:53 -- nvmf/common.sh@478 -- # killprocess 92513 00:19:52.969 22:39:53 -- common/autotest_common.sh@936 -- # '[' -z 92513 ']' 00:19:52.969 22:39:53 -- common/autotest_common.sh@940 -- # kill -0 92513 00:19:52.969 22:39:53 -- common/autotest_common.sh@941 -- # uname 00:19:52.969 22:39:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.969 22:39:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92513 00:19:52.969 killing process with pid 92513 00:19:52.969 22:39:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:52.969 22:39:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:52.969 22:39:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92513' 00:19:52.969 22:39:53 -- common/autotest_common.sh@955 -- # kill 92513 00:19:52.969 22:39:53 -- common/autotest_common.sh@960 -- # wait 92513 00:19:53.228 22:39:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:53.228 22:39:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:53.228 22:39:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:53.228 22:39:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.228 22:39:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:53.228 22:39:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.228 22:39:53 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:53.228 22:39:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.487 22:39:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:53.487 ************************************ 00:19:53.487 END TEST nvmf_multicontroller 00:19:53.487 ************************************ 00:19:53.487 00:19:53.487 real 0m5.034s 00:19:53.487 user 0m15.548s 00:19:53.487 sys 0m1.209s 00:19:53.487 22:39:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:53.487 22:39:53 -- common/autotest_common.sh@10 -- # set +x 00:19:53.487 22:39:54 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:53.487 22:39:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:53.487 22:39:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:53.487 22:39:54 -- common/autotest_common.sh@10 -- # set +x 00:19:53.487 ************************************ 00:19:53.487 START TEST nvmf_aer 00:19:53.487 ************************************ 00:19:53.487 22:39:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:53.487 * Looking for test storage... 00:19:53.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:53.487 22:39:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:53.487 22:39:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:53.487 22:39:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:53.487 22:39:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:53.487 22:39:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:53.487 22:39:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:53.487 22:39:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:53.487 22:39:54 -- scripts/common.sh@335 -- # IFS=.-: 00:19:53.487 22:39:54 -- scripts/common.sh@335 -- # read -ra ver1 00:19:53.487 22:39:54 -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.487 22:39:54 -- scripts/common.sh@336 -- # read -ra ver2 00:19:53.487 22:39:54 -- scripts/common.sh@337 -- # local 'op=<' 00:19:53.487 22:39:54 -- scripts/common.sh@339 -- # ver1_l=2 00:19:53.487 22:39:54 -- scripts/common.sh@340 -- # ver2_l=1 00:19:53.487 22:39:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:53.487 22:39:54 -- scripts/common.sh@343 -- # case "$op" in 00:19:53.487 22:39:54 -- scripts/common.sh@344 -- # : 1 00:19:53.487 22:39:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:53.487 22:39:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:53.487 22:39:54 -- scripts/common.sh@364 -- # decimal 1 00:19:53.487 22:39:54 -- scripts/common.sh@352 -- # local d=1 00:19:53.487 22:39:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.487 22:39:54 -- scripts/common.sh@354 -- # echo 1 00:19:53.487 22:39:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:53.487 22:39:54 -- scripts/common.sh@365 -- # decimal 2 00:19:53.487 22:39:54 -- scripts/common.sh@352 -- # local d=2 00:19:53.487 22:39:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.487 22:39:54 -- scripts/common.sh@354 -- # echo 2 00:19:53.487 22:39:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:53.487 22:39:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:53.487 22:39:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:53.487 22:39:54 -- scripts/common.sh@367 -- # return 0 00:19:53.487 22:39:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.746 22:39:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.746 --rc genhtml_branch_coverage=1 00:19:53.746 --rc genhtml_function_coverage=1 00:19:53.746 --rc genhtml_legend=1 00:19:53.746 --rc geninfo_all_blocks=1 00:19:53.746 --rc geninfo_unexecuted_blocks=1 00:19:53.746 00:19:53.746 ' 00:19:53.746 22:39:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.746 --rc genhtml_branch_coverage=1 00:19:53.746 --rc genhtml_function_coverage=1 00:19:53.746 --rc genhtml_legend=1 00:19:53.746 --rc geninfo_all_blocks=1 00:19:53.746 --rc geninfo_unexecuted_blocks=1 00:19:53.746 00:19:53.746 ' 00:19:53.746 22:39:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.746 --rc genhtml_branch_coverage=1 00:19:53.746 --rc genhtml_function_coverage=1 00:19:53.746 --rc genhtml_legend=1 00:19:53.746 --rc geninfo_all_blocks=1 00:19:53.746 --rc geninfo_unexecuted_blocks=1 00:19:53.746 00:19:53.746 ' 00:19:53.746 22:39:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.746 --rc genhtml_branch_coverage=1 00:19:53.746 --rc genhtml_function_coverage=1 00:19:53.746 --rc genhtml_legend=1 00:19:53.746 --rc geninfo_all_blocks=1 00:19:53.746 --rc geninfo_unexecuted_blocks=1 00:19:53.746 00:19:53.746 ' 00:19:53.746 22:39:54 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:53.746 22:39:54 -- nvmf/common.sh@7 -- # uname -s 00:19:53.746 22:39:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.746 22:39:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.746 22:39:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.746 22:39:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.746 22:39:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.746 22:39:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.746 22:39:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.746 22:39:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.746 22:39:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.746 22:39:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.746 22:39:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:53.746 
22:39:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:53.746 22:39:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.746 22:39:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.746 22:39:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:53.746 22:39:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:53.746 22:39:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.746 22:39:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.746 22:39:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.746 22:39:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.746 22:39:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.746 22:39:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.747 22:39:54 -- paths/export.sh@5 -- # export PATH 00:19:53.747 22:39:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.747 22:39:54 -- nvmf/common.sh@46 -- # : 0 00:19:53.747 22:39:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:53.747 22:39:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:53.747 22:39:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:53.747 22:39:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.747 22:39:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.747 22:39:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:19:53.747 22:39:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:53.747 22:39:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:53.747 22:39:54 -- host/aer.sh@11 -- # nvmftestinit 00:19:53.747 22:39:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:53.747 22:39:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.747 22:39:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:53.747 22:39:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:53.747 22:39:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:53.747 22:39:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.747 22:39:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.747 22:39:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.747 22:39:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:53.747 22:39:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:53.747 22:39:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:53.747 22:39:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:53.747 22:39:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:53.747 22:39:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:53.747 22:39:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.747 22:39:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.747 22:39:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:53.747 22:39:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:53.747 22:39:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:53.747 22:39:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:53.747 22:39:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:53.747 22:39:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.747 22:39:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:53.747 22:39:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:53.747 22:39:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:53.747 22:39:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:53.747 22:39:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:53.747 22:39:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:53.747 Cannot find device "nvmf_tgt_br" 00:19:53.747 22:39:54 -- nvmf/common.sh@154 -- # true 00:19:53.747 22:39:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:53.747 Cannot find device "nvmf_tgt_br2" 00:19:53.747 22:39:54 -- nvmf/common.sh@155 -- # true 00:19:53.747 22:39:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:53.747 22:39:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:53.747 Cannot find device "nvmf_tgt_br" 00:19:53.747 22:39:54 -- nvmf/common.sh@157 -- # true 00:19:53.747 22:39:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:53.747 Cannot find device "nvmf_tgt_br2" 00:19:53.747 22:39:54 -- nvmf/common.sh@158 -- # true 00:19:53.747 22:39:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:53.747 22:39:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:53.747 22:39:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:53.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.747 22:39:54 -- nvmf/common.sh@161 -- # true 00:19:53.747 22:39:54 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:53.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.747 22:39:54 -- nvmf/common.sh@162 -- # true 00:19:53.747 22:39:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:53.747 22:39:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:53.747 22:39:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:53.747 22:39:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:53.747 22:39:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:53.747 22:39:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:53.747 22:39:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:53.747 22:39:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:53.747 22:39:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:54.005 22:39:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:54.006 22:39:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:54.006 22:39:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:54.006 22:39:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:54.006 22:39:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:54.006 22:39:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:54.006 22:39:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:54.006 22:39:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:54.006 22:39:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:54.006 22:39:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:54.006 22:39:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:54.006 22:39:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:54.006 22:39:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:54.006 22:39:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:54.006 22:39:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:54.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:19:54.006 00:19:54.006 --- 10.0.0.2 ping statistics --- 00:19:54.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.006 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:54.006 22:39:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:54.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:54.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:54.006 00:19:54.006 --- 10.0.0.3 ping statistics --- 00:19:54.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.006 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:54.006 22:39:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:54.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:54.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:19:54.006 00:19:54.006 --- 10.0.0.1 ping statistics --- 00:19:54.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.006 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:54.006 22:39:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.006 22:39:54 -- nvmf/common.sh@421 -- # return 0 00:19:54.006 22:39:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:54.006 22:39:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.006 22:39:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:54.006 22:39:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:54.006 22:39:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.006 22:39:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:54.006 22:39:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:54.006 22:39:54 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:54.006 22:39:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:54.006 22:39:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.006 22:39:54 -- common/autotest_common.sh@10 -- # set +x 00:19:54.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.006 22:39:54 -- nvmf/common.sh@469 -- # nvmfpid=92824 00:19:54.006 22:39:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:54.006 22:39:54 -- nvmf/common.sh@470 -- # waitforlisten 92824 00:19:54.006 22:39:54 -- common/autotest_common.sh@829 -- # '[' -z 92824 ']' 00:19:54.006 22:39:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.006 22:39:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.006 22:39:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.006 22:39:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.006 22:39:54 -- common/autotest_common.sh@10 -- # set +x 00:19:54.006 [2024-11-20 22:39:54.663351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:54.006 [2024-11-20 22:39:54.663617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.264 [2024-11-20 22:39:54.803952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.264 [2024-11-20 22:39:54.887660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:54.264 [2024-11-20 22:39:54.888190] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.264 [2024-11-20 22:39:54.888353] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.264 [2024-11-20 22:39:54.888580] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
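The nvmf_veth_init sequence traced above wires up a small bridged test network: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator, the target interfaces (10.0.0.2, 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge before the target app starts. A condensed standalone sketch of the same wiring (root required; names copied from the trace, second target interface omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2        # initiator -> target reachability check, as in the log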
00:19:54.264 [2024-11-20 22:39:54.888872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.264 [2024-11-20 22:39:54.889106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.264 [2024-11-20 22:39:54.888962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.264 [2024-11-20 22:39:54.889099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.199 22:39:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.199 22:39:55 -- common/autotest_common.sh@862 -- # return 0 00:19:55.199 22:39:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:55.199 22:39:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.200 22:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:55.200 22:39:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.200 22:39:55 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.200 22:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.200 22:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:55.200 [2024-11-20 22:39:55.717410] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.200 22:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.200 22:39:55 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:55.200 22:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.200 22:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:55.200 Malloc0 00:19:55.200 22:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.200 22:39:55 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:55.200 22:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.200 22:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:55.200 22:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.200 22:39:55 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.200 22:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.200 22:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:55.200 22:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.200 22:39:55 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.200 22:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.200 22:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:55.200 [2024-11-20 22:39:55.791958] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.200 22:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.200 22:39:55 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:55.200 22:39:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.200 22:39:55 -- common/autotest_common.sh@10 -- # set +x 00:19:55.200 [2024-11-20 22:39:55.799685] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:55.200 [ 00:19:55.200 { 00:19:55.200 "allow_any_host": true, 00:19:55.200 "hosts": [], 00:19:55.200 "listen_addresses": [], 00:19:55.200 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:55.200 "subtype": "Discovery" 00:19:55.200 }, 00:19:55.200 { 00:19:55.200 "allow_any_host": true, 00:19:55.200 "hosts": 
[], 00:19:55.200 "listen_addresses": [ 00:19:55.200 { 00:19:55.200 "adrfam": "IPv4", 00:19:55.200 "traddr": "10.0.0.2", 00:19:55.200 "transport": "TCP", 00:19:55.200 "trsvcid": "4420", 00:19:55.200 "trtype": "TCP" 00:19:55.200 } 00:19:55.200 ], 00:19:55.200 "max_cntlid": 65519, 00:19:55.200 "max_namespaces": 2, 00:19:55.200 "min_cntlid": 1, 00:19:55.200 "model_number": "SPDK bdev Controller", 00:19:55.200 "namespaces": [ 00:19:55.200 { 00:19:55.200 "bdev_name": "Malloc0", 00:19:55.200 "name": "Malloc0", 00:19:55.200 "nguid": "DCC39A2D0715434C8151C12CF7BCC1C4", 00:19:55.200 "nsid": 1, 00:19:55.200 "uuid": "dcc39a2d-0715-434c-8151-c12cf7bcc1c4" 00:19:55.200 } 00:19:55.200 ], 00:19:55.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.200 "serial_number": "SPDK00000000000001", 00:19:55.200 "subtype": "NVMe" 00:19:55.200 } 00:19:55.200 ] 00:19:55.200 22:39:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.200 22:39:55 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:55.200 22:39:55 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:55.200 22:39:55 -- host/aer.sh@33 -- # aerpid=92878 00:19:55.200 22:39:55 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:55.200 22:39:55 -- common/autotest_common.sh@1254 -- # local i=0 00:19:55.200 22:39:55 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:55.200 22:39:55 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:55.200 22:39:55 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:19:55.200 22:39:55 -- common/autotest_common.sh@1257 -- # i=1 00:19:55.200 22:39:55 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:19:55.200 22:39:55 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:55.200 22:39:55 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:19:55.200 22:39:55 -- common/autotest_common.sh@1257 -- # i=2 00:19:55.200 22:39:55 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:19:55.459 22:39:56 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:55.459 22:39:56 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:55.459 22:39:56 -- common/autotest_common.sh@1265 -- # return 0 00:19:55.459 22:39:56 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:55.459 22:39:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.459 22:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.459 Malloc1 00:19:55.459 22:39:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.459 22:39:56 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:55.459 22:39:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.459 22:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.459 22:39:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.459 22:39:56 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:55.459 22:39:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.459 22:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.459 Asynchronous Event Request test 00:19:55.459 Attaching to 10.0.0.2 00:19:55.459 Attached to 10.0.0.2 00:19:55.459 Registering asynchronous event callbacks... 00:19:55.459 Starting namespace attribute notice tests for all controllers... 
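For readability, the target-side provisioning captured above (rpc_cmd is the harness wrapper around SPDK's JSON-RPC client) corresponds roughly to these direct scripts/rpc.py calls; the aer test binary then connects to the resulting subsystem as an initiator and waits for the namespace-change event:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems      # returns the JSON listing shown above
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the namespace-change AEN logged below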
00:19:55.459 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:55.459 aer_cb - Changed Namespace 00:19:55.459 Cleaning up... 00:19:55.459 [ 00:19:55.459 { 00:19:55.459 "allow_any_host": true, 00:19:55.459 "hosts": [], 00:19:55.459 "listen_addresses": [], 00:19:55.459 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:55.459 "subtype": "Discovery" 00:19:55.459 }, 00:19:55.459 { 00:19:55.459 "allow_any_host": true, 00:19:55.459 "hosts": [], 00:19:55.459 "listen_addresses": [ 00:19:55.459 { 00:19:55.459 "adrfam": "IPv4", 00:19:55.459 "traddr": "10.0.0.2", 00:19:55.459 "transport": "TCP", 00:19:55.459 "trsvcid": "4420", 00:19:55.459 "trtype": "TCP" 00:19:55.459 } 00:19:55.459 ], 00:19:55.459 "max_cntlid": 65519, 00:19:55.459 "max_namespaces": 2, 00:19:55.459 "min_cntlid": 1, 00:19:55.459 "model_number": "SPDK bdev Controller", 00:19:55.459 "namespaces": [ 00:19:55.459 { 00:19:55.459 "bdev_name": "Malloc0", 00:19:55.459 "name": "Malloc0", 00:19:55.459 "nguid": "DCC39A2D0715434C8151C12CF7BCC1C4", 00:19:55.459 "nsid": 1, 00:19:55.459 "uuid": "dcc39a2d-0715-434c-8151-c12cf7bcc1c4" 00:19:55.459 }, 00:19:55.459 { 00:19:55.459 "bdev_name": "Malloc1", 00:19:55.459 "name": "Malloc1", 00:19:55.459 "nguid": "1F584CA5B9954F9EB56BD0DD7D6AC3C7", 00:19:55.459 "nsid": 2, 00:19:55.459 "uuid": "1f584ca5-b995-4f9e-b56b-d0dd7d6ac3c7" 00:19:55.459 } 00:19:55.459 ], 00:19:55.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.459 "serial_number": "SPDK00000000000001", 00:19:55.459 "subtype": "NVMe" 00:19:55.459 } 00:19:55.459 ] 00:19:55.459 22:39:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.459 22:39:56 -- host/aer.sh@43 -- # wait 92878 00:19:55.459 22:39:56 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:55.459 22:39:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.459 22:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.459 22:39:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.459 22:39:56 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:55.459 22:39:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.459 22:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.718 22:39:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.718 22:39:56 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.718 22:39:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.718 22:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.718 22:39:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.718 22:39:56 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:55.718 22:39:56 -- host/aer.sh@51 -- # nvmftestfini 00:19:55.718 22:39:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:55.718 22:39:56 -- nvmf/common.sh@116 -- # sync 00:19:55.718 22:39:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:55.718 22:39:56 -- nvmf/common.sh@119 -- # set +e 00:19:55.718 22:39:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:55.718 22:39:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:55.718 rmmod nvme_tcp 00:19:55.718 rmmod nvme_fabrics 00:19:55.718 rmmod nvme_keyring 00:19:55.718 22:39:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:55.718 22:39:56 -- nvmf/common.sh@123 -- # set -e 00:19:55.718 22:39:56 -- nvmf/common.sh@124 -- # return 0 00:19:55.718 22:39:56 -- nvmf/common.sh@477 -- # '[' -n 92824 ']' 00:19:55.718 22:39:56 -- nvmf/common.sh@478 -- # killprocess 92824 00:19:55.718 22:39:56 -- 
common/autotest_common.sh@936 -- # '[' -z 92824 ']' 00:19:55.718 22:39:56 -- common/autotest_common.sh@940 -- # kill -0 92824 00:19:55.718 22:39:56 -- common/autotest_common.sh@941 -- # uname 00:19:55.718 22:39:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.718 22:39:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92824 00:19:55.718 22:39:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:55.718 22:39:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:55.718 killing process with pid 92824 00:19:55.718 22:39:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92824' 00:19:55.718 22:39:56 -- common/autotest_common.sh@955 -- # kill 92824 00:19:55.718 [2024-11-20 22:39:56.385049] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:55.718 22:39:56 -- common/autotest_common.sh@960 -- # wait 92824 00:19:55.977 22:39:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:55.977 22:39:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:55.977 22:39:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:55.977 22:39:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.977 22:39:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:55.977 22:39:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.977 22:39:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.977 22:39:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.977 22:39:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:55.977 00:19:55.977 real 0m2.645s 00:19:55.977 user 0m7.211s 00:19:55.977 sys 0m0.753s 00:19:55.977 22:39:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:55.977 22:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.977 ************************************ 00:19:55.977 END TEST nvmf_aer 00:19:55.977 ************************************ 00:19:56.236 22:39:56 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:56.236 22:39:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.236 22:39:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.236 22:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:56.236 ************************************ 00:19:56.236 START TEST nvmf_async_init 00:19:56.236 ************************************ 00:19:56.236 22:39:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:56.236 * Looking for test storage... 
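Before nvmf_async_init gets going, the nvmf_aer teardown traced just above undoes the setup in reverse: bdevs and the subsystem are removed over RPC, the initiator kernel modules are unloaded, the target process is killed, and the namespace plumbing is flushed. A compact sketch (pid and names as they appear in the log):

  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_malloc_delete Malloc1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp        # also drops the nvme_fabrics / nvme_keyring dependents seen above
  modprobe -v -r nvme-fabrics
  kill 92824                     # the nvmf_tgt started for this test
  ip netns delete nvmf_tgt_ns_spdk   # remove_spdk_ns equivalent (its output is suppressed in the trace)
  ip -4 addr flush nvmf_init_if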
00:19:56.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.236 22:39:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:56.236 22:39:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:56.236 22:39:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:56.236 22:39:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:56.236 22:39:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:56.236 22:39:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:56.236 22:39:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:56.236 22:39:56 -- scripts/common.sh@335 -- # IFS=.-: 00:19:56.236 22:39:56 -- scripts/common.sh@335 -- # read -ra ver1 00:19:56.236 22:39:56 -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.236 22:39:56 -- scripts/common.sh@336 -- # read -ra ver2 00:19:56.236 22:39:56 -- scripts/common.sh@337 -- # local 'op=<' 00:19:56.236 22:39:56 -- scripts/common.sh@339 -- # ver1_l=2 00:19:56.236 22:39:56 -- scripts/common.sh@340 -- # ver2_l=1 00:19:56.236 22:39:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:56.236 22:39:56 -- scripts/common.sh@343 -- # case "$op" in 00:19:56.236 22:39:56 -- scripts/common.sh@344 -- # : 1 00:19:56.236 22:39:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:56.236 22:39:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:56.236 22:39:56 -- scripts/common.sh@364 -- # decimal 1 00:19:56.236 22:39:56 -- scripts/common.sh@352 -- # local d=1 00:19:56.236 22:39:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.236 22:39:56 -- scripts/common.sh@354 -- # echo 1 00:19:56.236 22:39:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:56.236 22:39:56 -- scripts/common.sh@365 -- # decimal 2 00:19:56.236 22:39:56 -- scripts/common.sh@352 -- # local d=2 00:19:56.236 22:39:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.236 22:39:56 -- scripts/common.sh@354 -- # echo 2 00:19:56.236 22:39:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:56.236 22:39:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:56.236 22:39:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:56.236 22:39:56 -- scripts/common.sh@367 -- # return 0 00:19:56.236 22:39:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.236 22:39:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:56.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.236 --rc genhtml_branch_coverage=1 00:19:56.236 --rc genhtml_function_coverage=1 00:19:56.236 --rc genhtml_legend=1 00:19:56.236 --rc geninfo_all_blocks=1 00:19:56.236 --rc geninfo_unexecuted_blocks=1 00:19:56.236 00:19:56.236 ' 00:19:56.236 22:39:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:56.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.236 --rc genhtml_branch_coverage=1 00:19:56.236 --rc genhtml_function_coverage=1 00:19:56.236 --rc genhtml_legend=1 00:19:56.236 --rc geninfo_all_blocks=1 00:19:56.236 --rc geninfo_unexecuted_blocks=1 00:19:56.236 00:19:56.236 ' 00:19:56.236 22:39:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:56.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.236 --rc genhtml_branch_coverage=1 00:19:56.236 --rc genhtml_function_coverage=1 00:19:56.236 --rc genhtml_legend=1 00:19:56.236 --rc geninfo_all_blocks=1 00:19:56.236 --rc geninfo_unexecuted_blocks=1 00:19:56.236 00:19:56.236 ' 00:19:56.236 
22:39:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:56.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.236 --rc genhtml_branch_coverage=1 00:19:56.236 --rc genhtml_function_coverage=1 00:19:56.236 --rc genhtml_legend=1 00:19:56.236 --rc geninfo_all_blocks=1 00:19:56.236 --rc geninfo_unexecuted_blocks=1 00:19:56.236 00:19:56.236 ' 00:19:56.236 22:39:56 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.236 22:39:56 -- nvmf/common.sh@7 -- # uname -s 00:19:56.236 22:39:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.236 22:39:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.236 22:39:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.236 22:39:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.236 22:39:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.236 22:39:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.236 22:39:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.236 22:39:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.236 22:39:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.236 22:39:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.236 22:39:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:56.236 22:39:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:56.236 22:39:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.236 22:39:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.236 22:39:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.236 22:39:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.236 22:39:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.236 22:39:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.236 22:39:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.236 22:39:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.237 22:39:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.237 22:39:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.237 22:39:56 -- paths/export.sh@5 -- # export PATH 00:19:56.237 22:39:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.237 22:39:56 -- nvmf/common.sh@46 -- # : 0 00:19:56.237 22:39:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:56.237 22:39:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:56.237 22:39:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:56.237 22:39:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.237 22:39:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.237 22:39:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:56.237 22:39:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:56.237 22:39:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:56.237 22:39:56 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:56.237 22:39:56 -- host/async_init.sh@14 -- # null_block_size=512 00:19:56.237 22:39:56 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:56.237 22:39:56 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:56.237 22:39:56 -- host/async_init.sh@20 -- # uuidgen 00:19:56.237 22:39:56 -- host/async_init.sh@20 -- # tr -d - 00:19:56.237 22:39:56 -- host/async_init.sh@20 -- # nguid=42b915e7e882465eb11387e3592d6283 00:19:56.237 22:39:56 -- host/async_init.sh@22 -- # nvmftestinit 00:19:56.237 22:39:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:56.237 22:39:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.237 22:39:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:56.237 22:39:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:56.237 22:39:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:56.237 22:39:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.237 22:39:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.237 22:39:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.237 22:39:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:56.237 22:39:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:56.237 22:39:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:56.237 22:39:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:56.237 22:39:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:56.237 22:39:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:56.237 22:39:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.237 22:39:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.237 22:39:56 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:56.237 22:39:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:56.237 22:39:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.237 22:39:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.237 22:39:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.237 22:39:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.237 22:39:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.237 22:39:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.237 22:39:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.237 22:39:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.237 22:39:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:56.237 22:39:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:56.495 Cannot find device "nvmf_tgt_br" 00:19:56.495 22:39:56 -- nvmf/common.sh@154 -- # true 00:19:56.495 22:39:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.495 Cannot find device "nvmf_tgt_br2" 00:19:56.495 22:39:56 -- nvmf/common.sh@155 -- # true 00:19:56.495 22:39:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:56.495 22:39:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:56.495 Cannot find device "nvmf_tgt_br" 00:19:56.495 22:39:56 -- nvmf/common.sh@157 -- # true 00:19:56.495 22:39:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:56.495 Cannot find device "nvmf_tgt_br2" 00:19:56.495 22:39:57 -- nvmf/common.sh@158 -- # true 00:19:56.495 22:39:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:56.495 22:39:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:56.495 22:39:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.495 22:39:57 -- nvmf/common.sh@161 -- # true 00:19:56.495 22:39:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.495 22:39:57 -- nvmf/common.sh@162 -- # true 00:19:56.495 22:39:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.495 22:39:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.495 22:39:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.495 22:39:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.495 22:39:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.495 22:39:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.496 22:39:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.496 22:39:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:56.496 22:39:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:56.496 22:39:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:56.496 22:39:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:56.496 22:39:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:56.496 22:39:57 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:56.496 22:39:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.496 22:39:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.496 22:39:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.496 22:39:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:56.496 22:39:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:56.496 22:39:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.496 22:39:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.756 22:39:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.756 22:39:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.756 22:39:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.756 22:39:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:56.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:56.756 00:19:56.756 --- 10.0.0.2 ping statistics --- 00:19:56.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.756 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:56.756 22:39:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:56.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:56.756 00:19:56.756 --- 10.0.0.3 ping statistics --- 00:19:56.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.756 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:56.756 22:39:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:19:56.756 00:19:56.756 --- 10.0.0.1 ping statistics --- 00:19:56.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.756 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:56.756 22:39:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.756 22:39:57 -- nvmf/common.sh@421 -- # return 0 00:19:56.756 22:39:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.756 22:39:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.756 22:39:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:56.756 22:39:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:56.756 22:39:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.756 22:39:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:56.756 22:39:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:56.756 22:39:57 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:56.756 22:39:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:56.756 22:39:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.756 22:39:57 -- common/autotest_common.sh@10 -- # set +x 00:19:56.756 22:39:57 -- nvmf/common.sh@469 -- # nvmfpid=93057 00:19:56.756 22:39:57 -- nvmf/common.sh@470 -- # waitforlisten 93057 00:19:56.756 22:39:57 -- common/autotest_common.sh@829 -- # '[' -z 93057 ']' 00:19:56.756 22:39:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:56.756 22:39:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.756 22:39:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.756 22:39:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.756 22:39:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.756 22:39:57 -- common/autotest_common.sh@10 -- # set +x 00:19:56.756 [2024-11-20 22:39:57.352036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:56.756 [2024-11-20 22:39:57.352140] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.032 [2024-11-20 22:39:57.491619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.032 [2024-11-20 22:39:57.559266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:57.032 [2024-11-20 22:39:57.559430] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.032 [2024-11-20 22:39:57.559443] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.032 [2024-11-20 22:39:57.559452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
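nvmfappstart above reduces to launching nvmf_tgt inside the target namespace and waiting until its RPC socket answers. A minimal launch-and-wait sketch, assuming the default /var/tmp/spdk.sock and the repo path from the trace (the real waitforlisten helper is more defensive):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll the RPC socket until the target is up, then provisioning can begin
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done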
00:19:57.032 [2024-11-20 22:39:57.559490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.611 22:39:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.611 22:39:58 -- common/autotest_common.sh@862 -- # return 0 00:19:57.611 22:39:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:57.611 22:39:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.611 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.611 22:39:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.611 22:39:58 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:57.611 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.611 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.611 [2024-11-20 22:39:58.312830] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.611 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.611 22:39:58 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:57.611 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.611 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.611 null0 00:19:57.611 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.611 22:39:58 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:57.611 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.611 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.611 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.611 22:39:58 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:57.611 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.611 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.611 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.611 22:39:58 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 42b915e7e882465eb11387e3592d6283 00:19:57.611 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.611 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.870 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.870 22:39:58 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:57.870 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.870 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.870 [2024-11-20 22:39:58.352937] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.870 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.870 22:39:58 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:57.870 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.870 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:57.870 nvme0n1 00:19:57.870 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.870 22:39:58 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:57.870 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.870 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 [ 00:19:58.129 { 00:19:58.129 "aliases": [ 00:19:58.129 "42b915e7-e882-465e-b113-87e3592d6283" 
00:19:58.129 ], 00:19:58.129 "assigned_rate_limits": { 00:19:58.129 "r_mbytes_per_sec": 0, 00:19:58.129 "rw_ios_per_sec": 0, 00:19:58.129 "rw_mbytes_per_sec": 0, 00:19:58.129 "w_mbytes_per_sec": 0 00:19:58.129 }, 00:19:58.129 "block_size": 512, 00:19:58.129 "claimed": false, 00:19:58.129 "driver_specific": { 00:19:58.129 "mp_policy": "active_passive", 00:19:58.129 "nvme": [ 00:19:58.129 { 00:19:58.129 "ctrlr_data": { 00:19:58.129 "ana_reporting": false, 00:19:58.129 "cntlid": 1, 00:19:58.129 "firmware_revision": "24.01.1", 00:19:58.129 "model_number": "SPDK bdev Controller", 00:19:58.129 "multi_ctrlr": true, 00:19:58.129 "oacs": { 00:19:58.129 "firmware": 0, 00:19:58.129 "format": 0, 00:19:58.129 "ns_manage": 0, 00:19:58.129 "security": 0 00:19:58.129 }, 00:19:58.129 "serial_number": "00000000000000000000", 00:19:58.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.129 "vendor_id": "0x8086" 00:19:58.129 }, 00:19:58.129 "ns_data": { 00:19:58.129 "can_share": true, 00:19:58.129 "id": 1 00:19:58.129 }, 00:19:58.129 "trid": { 00:19:58.129 "adrfam": "IPv4", 00:19:58.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.129 "traddr": "10.0.0.2", 00:19:58.129 "trsvcid": "4420", 00:19:58.129 "trtype": "TCP" 00:19:58.129 }, 00:19:58.129 "vs": { 00:19:58.129 "nvme_version": "1.3" 00:19:58.129 } 00:19:58.129 } 00:19:58.129 ] 00:19:58.129 }, 00:19:58.129 "name": "nvme0n1", 00:19:58.129 "num_blocks": 2097152, 00:19:58.129 "product_name": "NVMe disk", 00:19:58.129 "supported_io_types": { 00:19:58.129 "abort": true, 00:19:58.129 "compare": true, 00:19:58.129 "compare_and_write": true, 00:19:58.129 "flush": true, 00:19:58.129 "nvme_admin": true, 00:19:58.129 "nvme_io": true, 00:19:58.129 "read": true, 00:19:58.129 "reset": true, 00:19:58.129 "unmap": false, 00:19:58.129 "write": true, 00:19:58.129 "write_zeroes": true 00:19:58.129 }, 00:19:58.129 "uuid": "42b915e7-e882-465e-b113-87e3592d6283", 00:19:58.129 "zoned": false 00:19:58.129 } 00:19:58.129 ] 00:19:58.129 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.129 22:39:58 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:58.129 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.129 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 [2024-11-20 22:39:58.624875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.129 [2024-11-20 22:39:58.624946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193b1c0 (9): Bad file descriptor 00:19:58.129 [2024-11-20 22:39:58.756388] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
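The async_init flow above exports a 1 GiB null bdev (512-byte blocks) under a fixed namespace GUID, and the same SPDK app then attaches to its own listener over TCP as bdev "nvme0", which is why bdev_get_bdevs reports uuid 42b915e7-e882-465e-b113-87e3592d6283. Roughly the same sequence as direct rpc.py calls:

  ./scripts/rpc.py bdev_null_create null0 1024 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 42b915e7e882465eb11387e3592d6283
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0    # the reset exercised above; the bdev reconnects with cntlid 2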
00:19:58.129 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.129 22:39:58 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:58.129 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.129 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 [ 00:19:58.129 { 00:19:58.129 "aliases": [ 00:19:58.129 "42b915e7-e882-465e-b113-87e3592d6283" 00:19:58.129 ], 00:19:58.129 "assigned_rate_limits": { 00:19:58.129 "r_mbytes_per_sec": 0, 00:19:58.129 "rw_ios_per_sec": 0, 00:19:58.129 "rw_mbytes_per_sec": 0, 00:19:58.129 "w_mbytes_per_sec": 0 00:19:58.129 }, 00:19:58.129 "block_size": 512, 00:19:58.129 "claimed": false, 00:19:58.129 "driver_specific": { 00:19:58.129 "mp_policy": "active_passive", 00:19:58.129 "nvme": [ 00:19:58.129 { 00:19:58.129 "ctrlr_data": { 00:19:58.129 "ana_reporting": false, 00:19:58.129 "cntlid": 2, 00:19:58.129 "firmware_revision": "24.01.1", 00:19:58.129 "model_number": "SPDK bdev Controller", 00:19:58.129 "multi_ctrlr": true, 00:19:58.129 "oacs": { 00:19:58.129 "firmware": 0, 00:19:58.129 "format": 0, 00:19:58.129 "ns_manage": 0, 00:19:58.129 "security": 0 00:19:58.129 }, 00:19:58.129 "serial_number": "00000000000000000000", 00:19:58.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.129 "vendor_id": "0x8086" 00:19:58.129 }, 00:19:58.129 "ns_data": { 00:19:58.129 "can_share": true, 00:19:58.129 "id": 1 00:19:58.129 }, 00:19:58.129 "trid": { 00:19:58.129 "adrfam": "IPv4", 00:19:58.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.129 "traddr": "10.0.0.2", 00:19:58.129 "trsvcid": "4420", 00:19:58.129 "trtype": "TCP" 00:19:58.129 }, 00:19:58.129 "vs": { 00:19:58.129 "nvme_version": "1.3" 00:19:58.129 } 00:19:58.129 } 00:19:58.129 ] 00:19:58.129 }, 00:19:58.129 "name": "nvme0n1", 00:19:58.129 "num_blocks": 2097152, 00:19:58.129 "product_name": "NVMe disk", 00:19:58.129 "supported_io_types": { 00:19:58.129 "abort": true, 00:19:58.129 "compare": true, 00:19:58.129 "compare_and_write": true, 00:19:58.129 "flush": true, 00:19:58.129 "nvme_admin": true, 00:19:58.129 "nvme_io": true, 00:19:58.129 "read": true, 00:19:58.129 "reset": true, 00:19:58.129 "unmap": false, 00:19:58.129 "write": true, 00:19:58.129 "write_zeroes": true 00:19:58.129 }, 00:19:58.129 "uuid": "42b915e7-e882-465e-b113-87e3592d6283", 00:19:58.129 "zoned": false 00:19:58.129 } 00:19:58.129 ] 00:19:58.129 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.129 22:39:58 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.129 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.129 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.129 22:39:58 -- host/async_init.sh@53 -- # mktemp 00:19:58.129 22:39:58 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.g5csr22QTZ 00:19:58.129 22:39:58 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:58.129 22:39:58 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.g5csr22QTZ 00:19:58.129 22:39:58 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:58.129 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.129 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.129 22:39:58 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:58.129 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.129 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 [2024-11-20 22:39:58.816995] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.129 [2024-11-20 22:39:58.817100] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:58.129 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.129 22:39:58 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g5csr22QTZ 00:19:58.129 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.129 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.129 22:39:58 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g5csr22QTZ 00:19:58.130 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.130 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.130 [2024-11-20 22:39:58.837000] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.389 nvme0n1 00:19:58.389 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.389 22:39:58 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:58.389 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.389 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.389 [ 00:19:58.389 { 00:19:58.389 "aliases": [ 00:19:58.389 "42b915e7-e882-465e-b113-87e3592d6283" 00:19:58.389 ], 00:19:58.389 "assigned_rate_limits": { 00:19:58.389 "r_mbytes_per_sec": 0, 00:19:58.389 "rw_ios_per_sec": 0, 00:19:58.389 "rw_mbytes_per_sec": 0, 00:19:58.389 "w_mbytes_per_sec": 0 00:19:58.389 }, 00:19:58.389 "block_size": 512, 00:19:58.389 "claimed": false, 00:19:58.389 "driver_specific": { 00:19:58.389 "mp_policy": "active_passive", 00:19:58.389 "nvme": [ 00:19:58.389 { 00:19:58.389 "ctrlr_data": { 00:19:58.389 "ana_reporting": false, 00:19:58.389 "cntlid": 3, 00:19:58.389 "firmware_revision": "24.01.1", 00:19:58.389 "model_number": "SPDK bdev Controller", 00:19:58.389 "multi_ctrlr": true, 00:19:58.389 "oacs": { 00:19:58.389 "firmware": 0, 00:19:58.389 "format": 0, 00:19:58.389 "ns_manage": 0, 00:19:58.389 "security": 0 00:19:58.389 }, 00:19:58.389 "serial_number": "00000000000000000000", 00:19:58.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.389 "vendor_id": "0x8086" 00:19:58.389 }, 00:19:58.389 "ns_data": { 00:19:58.389 "can_share": true, 00:19:58.389 "id": 1 00:19:58.389 }, 00:19:58.389 "trid": { 00:19:58.389 "adrfam": "IPv4", 00:19:58.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.389 "traddr": "10.0.0.2", 00:19:58.389 "trsvcid": "4421", 00:19:58.389 "trtype": "TCP" 00:19:58.389 }, 00:19:58.389 "vs": { 00:19:58.389 "nvme_version": "1.3" 00:19:58.389 } 00:19:58.389 } 00:19:58.389 ] 00:19:58.389 }, 00:19:58.389 "name": "nvme0n1", 00:19:58.389 "num_blocks": 2097152, 00:19:58.389 "product_name": "NVMe disk", 00:19:58.389 "supported_io_types": { 00:19:58.389 "abort": true, 00:19:58.389 "compare": true, 00:19:58.389 "compare_and_write": true, 00:19:58.389 "flush": true, 00:19:58.389 "nvme_admin": true, 00:19:58.389 "nvme_io": true, 00:19:58.389 
"read": true, 00:19:58.389 "reset": true, 00:19:58.389 "unmap": false, 00:19:58.389 "write": true, 00:19:58.389 "write_zeroes": true 00:19:58.389 }, 00:19:58.389 "uuid": "42b915e7-e882-465e-b113-87e3592d6283", 00:19:58.389 "zoned": false 00:19:58.389 } 00:19:58.389 ] 00:19:58.389 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.389 22:39:58 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.389 22:39:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.389 22:39:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.389 22:39:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.389 22:39:58 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.g5csr22QTZ 00:19:58.389 22:39:58 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:58.389 22:39:58 -- host/async_init.sh@78 -- # nvmftestfini 00:19:58.389 22:39:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.389 22:39:58 -- nvmf/common.sh@116 -- # sync 00:19:58.389 22:39:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.389 22:39:59 -- nvmf/common.sh@119 -- # set +e 00:19:58.389 22:39:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.389 22:39:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.389 rmmod nvme_tcp 00:19:58.389 rmmod nvme_fabrics 00:19:58.389 rmmod nvme_keyring 00:19:58.389 22:39:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:58.389 22:39:59 -- nvmf/common.sh@123 -- # set -e 00:19:58.389 22:39:59 -- nvmf/common.sh@124 -- # return 0 00:19:58.389 22:39:59 -- nvmf/common.sh@477 -- # '[' -n 93057 ']' 00:19:58.389 22:39:59 -- nvmf/common.sh@478 -- # killprocess 93057 00:19:58.389 22:39:59 -- common/autotest_common.sh@936 -- # '[' -z 93057 ']' 00:19:58.389 22:39:59 -- common/autotest_common.sh@940 -- # kill -0 93057 00:19:58.389 22:39:59 -- common/autotest_common.sh@941 -- # uname 00:19:58.389 22:39:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:58.389 22:39:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93057 00:19:58.389 22:39:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:58.389 22:39:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:58.389 killing process with pid 93057 00:19:58.389 22:39:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93057' 00:19:58.390 22:39:59 -- common/autotest_common.sh@955 -- # kill 93057 00:19:58.390 22:39:59 -- common/autotest_common.sh@960 -- # wait 93057 00:19:58.653 22:39:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:58.653 22:39:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:58.653 22:39:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:58.653 22:39:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.653 22:39:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:58.653 22:39:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.653 22:39:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.653 22:39:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.653 22:39:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:58.653 ************************************ 00:19:58.653 END TEST nvmf_async_init 00:19:58.653 ************************************ 00:19:58.653 00:19:58.653 real 0m2.652s 00:19:58.653 user 0m2.392s 00:19:58.653 sys 0m0.641s 00:19:58.653 22:39:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:58.653 22:39:59 -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.913 22:39:59 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:58.913 22:39:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:58.913 22:39:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.913 22:39:59 -- common/autotest_common.sh@10 -- # set +x 00:19:58.913 ************************************ 00:19:58.913 START TEST dma 00:19:58.913 ************************************ 00:19:58.913 22:39:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:58.913 * Looking for test storage... 00:19:58.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:58.913 22:39:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:58.913 22:39:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:58.913 22:39:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:58.913 22:39:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:58.913 22:39:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:58.913 22:39:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:58.913 22:39:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:58.913 22:39:59 -- scripts/common.sh@335 -- # IFS=.-: 00:19:58.913 22:39:59 -- scripts/common.sh@335 -- # read -ra ver1 00:19:58.913 22:39:59 -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.913 22:39:59 -- scripts/common.sh@336 -- # read -ra ver2 00:19:58.913 22:39:59 -- scripts/common.sh@337 -- # local 'op=<' 00:19:58.913 22:39:59 -- scripts/common.sh@339 -- # ver1_l=2 00:19:58.913 22:39:59 -- scripts/common.sh@340 -- # ver2_l=1 00:19:58.913 22:39:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:58.913 22:39:59 -- scripts/common.sh@343 -- # case "$op" in 00:19:58.913 22:39:59 -- scripts/common.sh@344 -- # : 1 00:19:58.913 22:39:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:58.913 22:39:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.913 22:39:59 -- scripts/common.sh@364 -- # decimal 1 00:19:58.913 22:39:59 -- scripts/common.sh@352 -- # local d=1 00:19:58.913 22:39:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.913 22:39:59 -- scripts/common.sh@354 -- # echo 1 00:19:58.913 22:39:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:58.913 22:39:59 -- scripts/common.sh@365 -- # decimal 2 00:19:58.913 22:39:59 -- scripts/common.sh@352 -- # local d=2 00:19:58.913 22:39:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.913 22:39:59 -- scripts/common.sh@354 -- # echo 2 00:19:58.913 22:39:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:58.913 22:39:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:58.913 22:39:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:58.913 22:39:59 -- scripts/common.sh@367 -- # return 0 00:19:58.913 22:39:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.913 22:39:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.913 --rc genhtml_branch_coverage=1 00:19:58.913 --rc genhtml_function_coverage=1 00:19:58.913 --rc genhtml_legend=1 00:19:58.913 --rc geninfo_all_blocks=1 00:19:58.913 --rc geninfo_unexecuted_blocks=1 00:19:58.913 00:19:58.913 ' 00:19:58.913 22:39:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.913 --rc genhtml_branch_coverage=1 00:19:58.913 --rc genhtml_function_coverage=1 00:19:58.913 --rc genhtml_legend=1 00:19:58.913 --rc geninfo_all_blocks=1 00:19:58.913 --rc geninfo_unexecuted_blocks=1 00:19:58.913 00:19:58.913 ' 00:19:58.913 22:39:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.913 --rc genhtml_branch_coverage=1 00:19:58.913 --rc genhtml_function_coverage=1 00:19:58.913 --rc genhtml_legend=1 00:19:58.913 --rc geninfo_all_blocks=1 00:19:58.913 --rc geninfo_unexecuted_blocks=1 00:19:58.913 00:19:58.913 ' 00:19:58.913 22:39:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.914 --rc genhtml_branch_coverage=1 00:19:58.914 --rc genhtml_function_coverage=1 00:19:58.914 --rc genhtml_legend=1 00:19:58.914 --rc geninfo_all_blocks=1 00:19:58.914 --rc geninfo_unexecuted_blocks=1 00:19:58.914 00:19:58.914 ' 00:19:58.914 22:39:59 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.914 22:39:59 -- nvmf/common.sh@7 -- # uname -s 00:19:58.914 22:39:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.914 22:39:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.914 22:39:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.914 22:39:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.914 22:39:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.914 22:39:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.914 22:39:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.914 22:39:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.914 22:39:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.914 22:39:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.914 22:39:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:58.914 
22:39:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:58.914 22:39:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.914 22:39:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.914 22:39:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.914 22:39:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.914 22:39:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.914 22:39:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.914 22:39:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.914 22:39:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.914 22:39:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.914 22:39:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.914 22:39:59 -- paths/export.sh@5 -- # export PATH 00:19:58.914 22:39:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.914 22:39:59 -- nvmf/common.sh@46 -- # : 0 00:19:58.914 22:39:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:58.914 22:39:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:58.914 22:39:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:58.914 22:39:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.914 22:39:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.914 22:39:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
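The xtrace above walks scripts/common.sh's lcov version gate (lt -> cmp_versions): the test splits the "lcov --version" output and the literal "2" on ".", "-" and ":" and compares the pieces numerically before choosing the legacy --rc lcov options. A minimal bash sketch of that comparison, under an illustrative helper name (version_lt; the real helpers are lt/cmp_versions in scripts/common.sh and differ in detail):

version_lt() {                           # succeeds when $1 < $2
    local IFS=.-:                        # split versions the way cmp_versions does
    local -a a=($1) b=($2)
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}  # missing components compare as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                             # equal is not "less than"
}

if version_lt 1.15 2; then               # mirrors the "lt 1.15 2" call in the trace
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi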
00:19:58.914 22:39:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:58.914 22:39:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:58.914 22:39:59 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:58.914 22:39:59 -- host/dma.sh@13 -- # exit 0 00:19:58.914 00:19:58.914 real 0m0.206s 00:19:58.914 user 0m0.125s 00:19:58.914 sys 0m0.093s 00:19:58.914 22:39:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:58.914 22:39:59 -- common/autotest_common.sh@10 -- # set +x 00:19:58.914 ************************************ 00:19:58.914 END TEST dma 00:19:58.914 ************************************ 00:19:59.173 22:39:59 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:59.173 22:39:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.173 22:39:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.173 22:39:59 -- common/autotest_common.sh@10 -- # set +x 00:19:59.173 ************************************ 00:19:59.173 START TEST nvmf_identify 00:19:59.173 ************************************ 00:19:59.173 22:39:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:59.173 * Looking for test storage... 00:19:59.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:59.173 22:39:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:59.173 22:39:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:59.173 22:39:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:59.173 22:39:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:59.173 22:39:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:59.173 22:39:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:59.173 22:39:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:59.173 22:39:59 -- scripts/common.sh@335 -- # IFS=.-: 00:19:59.173 22:39:59 -- scripts/common.sh@335 -- # read -ra ver1 00:19:59.173 22:39:59 -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.173 22:39:59 -- scripts/common.sh@336 -- # read -ra ver2 00:19:59.173 22:39:59 -- scripts/common.sh@337 -- # local 'op=<' 00:19:59.173 22:39:59 -- scripts/common.sh@339 -- # ver1_l=2 00:19:59.173 22:39:59 -- scripts/common.sh@340 -- # ver2_l=1 00:19:59.173 22:39:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:59.173 22:39:59 -- scripts/common.sh@343 -- # case "$op" in 00:19:59.173 22:39:59 -- scripts/common.sh@344 -- # : 1 00:19:59.173 22:39:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:59.173 22:39:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.173 22:39:59 -- scripts/common.sh@364 -- # decimal 1 00:19:59.173 22:39:59 -- scripts/common.sh@352 -- # local d=1 00:19:59.173 22:39:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.173 22:39:59 -- scripts/common.sh@354 -- # echo 1 00:19:59.174 22:39:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:59.174 22:39:59 -- scripts/common.sh@365 -- # decimal 2 00:19:59.174 22:39:59 -- scripts/common.sh@352 -- # local d=2 00:19:59.174 22:39:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.174 22:39:59 -- scripts/common.sh@354 -- # echo 2 00:19:59.174 22:39:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:59.174 22:39:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:59.174 22:39:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:59.174 22:39:59 -- scripts/common.sh@367 -- # return 0 00:19:59.174 22:39:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.174 22:39:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:59.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.174 --rc genhtml_branch_coverage=1 00:19:59.174 --rc genhtml_function_coverage=1 00:19:59.174 --rc genhtml_legend=1 00:19:59.174 --rc geninfo_all_blocks=1 00:19:59.174 --rc geninfo_unexecuted_blocks=1 00:19:59.174 00:19:59.174 ' 00:19:59.174 22:39:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:59.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.174 --rc genhtml_branch_coverage=1 00:19:59.174 --rc genhtml_function_coverage=1 00:19:59.174 --rc genhtml_legend=1 00:19:59.174 --rc geninfo_all_blocks=1 00:19:59.174 --rc geninfo_unexecuted_blocks=1 00:19:59.174 00:19:59.174 ' 00:19:59.174 22:39:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:59.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.174 --rc genhtml_branch_coverage=1 00:19:59.174 --rc genhtml_function_coverage=1 00:19:59.174 --rc genhtml_legend=1 00:19:59.174 --rc geninfo_all_blocks=1 00:19:59.174 --rc geninfo_unexecuted_blocks=1 00:19:59.174 00:19:59.174 ' 00:19:59.174 22:39:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:59.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.174 --rc genhtml_branch_coverage=1 00:19:59.174 --rc genhtml_function_coverage=1 00:19:59.174 --rc genhtml_legend=1 00:19:59.174 --rc geninfo_all_blocks=1 00:19:59.174 --rc geninfo_unexecuted_blocks=1 00:19:59.174 00:19:59.174 ' 00:19:59.174 22:39:59 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.174 22:39:59 -- nvmf/common.sh@7 -- # uname -s 00:19:59.174 22:39:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.174 22:39:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.174 22:39:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.174 22:39:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.174 22:39:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.174 22:39:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.174 22:39:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.174 22:39:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.174 22:39:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.174 22:39:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.174 22:39:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:59.174 
22:39:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:19:59.174 22:39:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.174 22:39:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.174 22:39:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.174 22:39:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.174 22:39:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.174 22:39:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.174 22:39:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.174 22:39:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.174 22:39:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.174 22:39:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.174 22:39:59 -- paths/export.sh@5 -- # export PATH 00:19:59.174 22:39:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.174 22:39:59 -- nvmf/common.sh@46 -- # : 0 00:19:59.174 22:39:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:59.174 22:39:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:59.174 22:39:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:59.174 22:39:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.432 22:39:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.432 22:39:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
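The nvme-cli variables exported just above (NVME_CONNECT, NVME_HOST, the generated NVME_HOSTNQN/NVME_HOSTID) are the building blocks of a host-side fabrics connect. A hedged illustration of how they compose; the address, port and subsystem NQN are the ones this identify test configures further below, and the test itself drives the target through SPDK tooling rather than the kernel initiator:

NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # the uuid part, as derived in the trace above
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode1 \
     --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"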
00:19:59.432 22:39:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:59.432 22:39:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:59.432 22:39:59 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.433 22:39:59 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.433 22:39:59 -- host/identify.sh@14 -- # nvmftestinit 00:19:59.433 22:39:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:59.433 22:39:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.433 22:39:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:59.433 22:39:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:59.433 22:39:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:59.433 22:39:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.433 22:39:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.433 22:39:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.433 22:39:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:59.433 22:39:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:59.433 22:39:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:59.433 22:39:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:59.433 22:39:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:59.433 22:39:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:59.433 22:39:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.433 22:39:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.433 22:39:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.433 22:39:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:59.433 22:39:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.433 22:39:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.433 22:39:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.433 22:39:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.433 22:39:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.433 22:39:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.433 22:39:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.433 22:39:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.433 22:39:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:59.433 22:39:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:59.433 Cannot find device "nvmf_tgt_br" 00:19:59.433 22:39:59 -- nvmf/common.sh@154 -- # true 00:19:59.433 22:39:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.433 Cannot find device "nvmf_tgt_br2" 00:19:59.433 22:39:59 -- nvmf/common.sh@155 -- # true 00:19:59.433 22:39:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:59.433 22:39:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:59.433 Cannot find device "nvmf_tgt_br" 00:19:59.433 22:39:59 -- nvmf/common.sh@157 -- # true 00:19:59.433 22:39:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:59.433 Cannot find device "nvmf_tgt_br2" 00:19:59.433 22:39:59 -- nvmf/common.sh@158 -- # true 00:19:59.433 22:39:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:59.433 22:40:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:59.433 22:40:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:59.433 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:19:59.433 22:40:00 -- nvmf/common.sh@161 -- # true 00:19:59.433 22:40:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:59.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.433 22:40:00 -- nvmf/common.sh@162 -- # true 00:19:59.433 22:40:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:59.433 22:40:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:59.433 22:40:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:59.433 22:40:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:59.433 22:40:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:59.433 22:40:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:59.433 22:40:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:59.433 22:40:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:59.691 22:40:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:59.691 22:40:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:59.691 22:40:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:59.691 22:40:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:59.691 22:40:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:59.691 22:40:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.691 22:40:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:59.691 22:40:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:59.691 22:40:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:59.691 22:40:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:59.691 22:40:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.691 22:40:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:59.691 22:40:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:59.691 22:40:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:59.691 22:40:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.691 22:40:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:59.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:59.691 00:19:59.691 --- 10.0.0.2 ping statistics --- 00:19:59.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.691 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:59.691 22:40:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:59.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:59.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:19:59.691 00:19:59.691 --- 10.0.0.3 ping statistics --- 00:19:59.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.691 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:59.692 22:40:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:59.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:59.692 00:19:59.692 --- 10.0.0.1 ping statistics --- 00:19:59.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.692 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:59.692 22:40:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.692 22:40:00 -- nvmf/common.sh@421 -- # return 0 00:19:59.692 22:40:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:59.692 22:40:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.692 22:40:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:59.692 22:40:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:59.692 22:40:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.692 22:40:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:59.692 22:40:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:59.692 22:40:00 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:59.692 22:40:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.692 22:40:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.692 22:40:00 -- host/identify.sh@19 -- # nvmfpid=93336 00:19:59.692 22:40:00 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.692 22:40:00 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:59.692 22:40:00 -- host/identify.sh@23 -- # waitforlisten 93336 00:19:59.692 22:40:00 -- common/autotest_common.sh@829 -- # '[' -z 93336 ']' 00:19:59.692 22:40:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.692 22:40:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.692 22:40:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.692 22:40:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.692 22:40:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.692 [2024-11-20 22:40:00.369373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:59.692 [2024-11-20 22:40:00.369972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.949 [2024-11-20 22:40:00.512114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.949 [2024-11-20 22:40:00.606223] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:59.949 [2024-11-20 22:40:00.606431] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.949 [2024-11-20 22:40:00.606451] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.949 [2024-11-20 22:40:00.606463] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
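Condensed from the nvmftestinit/nvmf_veth_init trace above, this is the virtual topology the identify test runs against (a sketch, not a verbatim copy of test/nvmf/common.sh, which also tears down any previous run): the initiator side keeps 10.0.0.1 on nvmf_init_if, the target namespace owns 10.0.0.2 and 10.0.0.3, and everything is bridged through nvmf_br.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # host side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c \
    'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The target then starts inside the namespace and listens on 10.0.0.2 (see the
# waitforlisten/reactor messages that follow in the log):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &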
00:19:59.949 [2024-11-20 22:40:00.606575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.949 [2024-11-20 22:40:00.607294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.949 [2024-11-20 22:40:00.607500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.949 [2024-11-20 22:40:00.607511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.886 22:40:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.886 22:40:01 -- common/autotest_common.sh@862 -- # return 0 00:20:00.886 22:40:01 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:00.886 22:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.886 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 [2024-11-20 22:40:01.381043] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.886 22:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.886 22:40:01 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:00.886 22:40:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.886 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 22:40:01 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:00.886 22:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.886 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 Malloc0 00:20:00.886 22:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.886 22:40:01 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.886 22:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.886 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 22:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.886 22:40:01 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:00.886 22:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.886 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 22:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.886 22:40:01 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.886 22:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.886 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 [2024-11-20 22:40:01.496354] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.886 22:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.886 22:40:01 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:00.886 22:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.886 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 22:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.886 22:40:01 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:00.886 22:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.886 22:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 [2024-11-20 22:40:01.512025] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:00.886 [ 
00:20:00.886 { 00:20:00.886 "allow_any_host": true, 00:20:00.886 "hosts": [], 00:20:00.886 "listen_addresses": [ 00:20:00.886 { 00:20:00.886 "adrfam": "IPv4", 00:20:00.886 "traddr": "10.0.0.2", 00:20:00.886 "transport": "TCP", 00:20:00.886 "trsvcid": "4420", 00:20:00.886 "trtype": "TCP" 00:20:00.886 } 00:20:00.886 ], 00:20:00.886 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:00.886 "subtype": "Discovery" 00:20:00.886 }, 00:20:00.886 { 00:20:00.886 "allow_any_host": true, 00:20:00.886 "hosts": [], 00:20:00.886 "listen_addresses": [ 00:20:00.886 { 00:20:00.886 "adrfam": "IPv4", 00:20:00.886 "traddr": "10.0.0.2", 00:20:00.886 "transport": "TCP", 00:20:00.886 "trsvcid": "4420", 00:20:00.886 "trtype": "TCP" 00:20:00.886 } 00:20:00.886 ], 00:20:00.886 "max_cntlid": 65519, 00:20:00.886 "max_namespaces": 32, 00:20:00.886 "min_cntlid": 1, 00:20:00.886 "model_number": "SPDK bdev Controller", 00:20:00.886 "namespaces": [ 00:20:00.886 { 00:20:00.886 "bdev_name": "Malloc0", 00:20:00.886 "eui64": "ABCDEF0123456789", 00:20:00.886 "name": "Malloc0", 00:20:00.886 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:00.886 "nsid": 1, 00:20:00.886 "uuid": "73c92336-a2f1-4a7d-bf33-65ba781495bf" 00:20:00.886 } 00:20:00.886 ], 00:20:00.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.886 "serial_number": "SPDK00000000000001", 00:20:00.886 "subtype": "NVMe" 00:20:00.886 } 00:20:00.886 ] 00:20:00.886 22:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.886 22:40:01 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:00.886 [2024-11-20 22:40:01.546114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:00.886 [2024-11-20 22:40:01.546167] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93389 ] 00:20:01.148 [2024-11-20 22:40:01.677616] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:01.148 [2024-11-20 22:40:01.677676] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:01.148 [2024-11-20 22:40:01.677682] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:01.148 [2024-11-20 22:40:01.677700] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:01.148 [2024-11-20 22:40:01.677710] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:01.148 [2024-11-20 22:40:01.677903] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:01.148 [2024-11-20 22:40:01.677984] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x131d540 0 00:20:01.148 [2024-11-20 22:40:01.691342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:01.148 [2024-11-20 22:40:01.691365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:01.148 [2024-11-20 22:40:01.691379] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:01.148 [2024-11-20 22:40:01.691383] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:01.148 [2024-11-20 22:40:01.691436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.148 [2024-11-20 22:40:01.691444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.148 [2024-11-20 22:40:01.691447] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.148 [2024-11-20 22:40:01.691464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:01.148 [2024-11-20 22:40:01.691505] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.148 [2024-11-20 22:40:01.698894] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.148 [2024-11-20 22:40:01.698908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.148 [2024-11-20 22:40:01.698912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.148 [2024-11-20 22:40:01.698924] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.148 [2024-11-20 22:40:01.698944] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:01.148 [2024-11-20 22:40:01.698951] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:01.148 [2024-11-20 22:40:01.698957] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:01.148 [2024-11-20 22:40:01.698972] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.148 [2024-11-20 22:40:01.698977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.148 [2024-11-20 
22:40:01.698980] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.148 [2024-11-20 22:40:01.698988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.148 [2024-11-20 22:40:01.699016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.148 [2024-11-20 22:40:01.699123] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.148 [2024-11-20 22:40:01.699129] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.148 [2024-11-20 22:40:01.699143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.148 [2024-11-20 22:40:01.699146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.148 [2024-11-20 22:40:01.699152] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:01.148 [2024-11-20 22:40:01.699159] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:01.148 [2024-11-20 22:40:01.699168] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.148 [2024-11-20 22:40:01.699172] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.148 [2024-11-20 22:40:01.699175] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.148 [2024-11-20 22:40:01.699182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.148 [2024-11-20 22:40:01.699201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.149 [2024-11-20 22:40:01.699322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.149 [2024-11-20 22:40:01.699330] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.149 [2024-11-20 22:40:01.699333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699337] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.149 [2024-11-20 22:40:01.699343] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:01.149 [2024-11-20 22:40:01.699351] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:01.149 [2024-11-20 22:40:01.699358] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.149 [2024-11-20 22:40:01.699373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.149 [2024-11-20 22:40:01.699393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.149 [2024-11-20 22:40:01.699465] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.149 [2024-11-20 22:40:01.699471] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.149 [2024-11-20 22:40:01.699476] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699480] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.149 [2024-11-20 22:40:01.699486] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:01.149 [2024-11-20 22:40:01.699495] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699500] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699503] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.149 [2024-11-20 22:40:01.699510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.149 [2024-11-20 22:40:01.699527] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.149 [2024-11-20 22:40:01.699600] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.149 [2024-11-20 22:40:01.699606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.149 [2024-11-20 22:40:01.699610] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.149 [2024-11-20 22:40:01.699619] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:01.149 [2024-11-20 22:40:01.699624] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:01.149 [2024-11-20 22:40:01.699631] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:01.149 [2024-11-20 22:40:01.699736] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:01.149 [2024-11-20 22:40:01.699741] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:01.149 [2024-11-20 22:40:01.699750] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699753] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.149 [2024-11-20 22:40:01.699763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.149 [2024-11-20 22:40:01.699780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.149 [2024-11-20 22:40:01.699847] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.149 [2024-11-20 22:40:01.699853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.149 [2024-11-20 22:40:01.699856] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:01.149 [2024-11-20 22:40:01.699860] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.149 [2024-11-20 22:40:01.699865] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:01.149 [2024-11-20 22:40:01.699873] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699877] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699880] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.149 [2024-11-20 22:40:01.699887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.149 [2024-11-20 22:40:01.699903] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.149 [2024-11-20 22:40:01.699966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.149 [2024-11-20 22:40:01.699971] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.149 [2024-11-20 22:40:01.699975] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.699978] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.149 [2024-11-20 22:40:01.699983] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:01.149 [2024-11-20 22:40:01.699988] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:01.149 [2024-11-20 22:40:01.699995] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:01.149 [2024-11-20 22:40:01.700011] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:01.149 [2024-11-20 22:40:01.700020] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.700024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.700027] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.149 [2024-11-20 22:40:01.700034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.149 [2024-11-20 22:40:01.700051] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.149 [2024-11-20 22:40:01.700163] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.149 [2024-11-20 22:40:01.700174] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.149 [2024-11-20 22:40:01.700178] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.700182] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131d540): datao=0, datal=4096, cccid=0 00:20:01.149 [2024-11-20 22:40:01.700187] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1356220) on tqpair(0x131d540): expected_datao=0, 
payload_size=4096 00:20:01.149 [2024-11-20 22:40:01.700195] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.700199] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.700209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.149 [2024-11-20 22:40:01.700214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.149 [2024-11-20 22:40:01.700217] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.700221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.149 [2024-11-20 22:40:01.700229] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:01.149 [2024-11-20 22:40:01.700234] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:01.149 [2024-11-20 22:40:01.700238] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:01.149 [2024-11-20 22:40:01.700243] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:01.149 [2024-11-20 22:40:01.700248] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:01.149 [2024-11-20 22:40:01.700252] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:01.149 [2024-11-20 22:40:01.700264] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:01.149 [2024-11-20 22:40:01.700272] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.700301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.149 [2024-11-20 22:40:01.700305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:01.150 [2024-11-20 22:40:01.700333] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.150 [2024-11-20 22:40:01.700427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.150 [2024-11-20 22:40:01.700433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.150 [2024-11-20 22:40:01.700437] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356220) on tqpair=0x131d540 00:20:01.150 [2024-11-20 22:40:01.700449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.150 [2024-11-20 
22:40:01.700468] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.150 [2024-11-20 22:40:01.700485] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700489] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.150 [2024-11-20 22:40:01.700503] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700507] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.150 [2024-11-20 22:40:01.700519] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:01.150 [2024-11-20 22:40:01.700533] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:01.150 [2024-11-20 22:40:01.700539] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700546] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.150 [2024-11-20 22:40:01.700573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356220, cid 0, qid 0 00:20:01.150 [2024-11-20 22:40:01.700580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356380, cid 1, qid 0 00:20:01.150 [2024-11-20 22:40:01.700584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13564e0, cid 2, qid 0 00:20:01.150 [2024-11-20 22:40:01.700588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.150 [2024-11-20 22:40:01.700593] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13567a0, cid 4, qid 0 00:20:01.150 [2024-11-20 22:40:01.700717] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.150 [2024-11-20 22:40:01.700723] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.150 [2024-11-20 22:40:01.700726] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700730] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x13567a0) on tqpair=0x131d540 00:20:01.150 [2024-11-20 22:40:01.700736] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:01.150 [2024-11-20 22:40:01.700741] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:01.150 [2024-11-20 22:40:01.700750] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.150 [2024-11-20 22:40:01.700781] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13567a0, cid 4, qid 0 00:20:01.150 [2024-11-20 22:40:01.700852] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.150 [2024-11-20 22:40:01.700858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.150 [2024-11-20 22:40:01.700861] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700865] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131d540): datao=0, datal=4096, cccid=4 00:20:01.150 [2024-11-20 22:40:01.700869] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13567a0) on tqpair(0x131d540): expected_datao=0, payload_size=4096 00:20:01.150 [2024-11-20 22:40:01.700875] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700879] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.150 [2024-11-20 22:40:01.700891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.150 [2024-11-20 22:40:01.700894] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700897] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13567a0) on tqpair=0x131d540 00:20:01.150 [2024-11-20 22:40:01.700909] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:01.150 [2024-11-20 22:40:01.700943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700949] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.150 [2024-11-20 22:40:01.700965] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700969] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.700972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131d540) 00:20:01.150 [2024-11-20 22:40:01.700977] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.150 [2024-11-20 22:40:01.701001] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13567a0, cid 4, qid 0 00:20:01.150 [2024-11-20 22:40:01.701007] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356900, cid 5, qid 0 00:20:01.150 [2024-11-20 22:40:01.701117] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.150 [2024-11-20 22:40:01.701123] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.150 [2024-11-20 22:40:01.701126] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.701129] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131d540): datao=0, datal=1024, cccid=4 00:20:01.150 [2024-11-20 22:40:01.701133] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13567a0) on tqpair(0x131d540): expected_datao=0, payload_size=1024 00:20:01.150 [2024-11-20 22:40:01.701140] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.701143] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.701148] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.150 [2024-11-20 22:40:01.701153] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.150 [2024-11-20 22:40:01.701156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.701159] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356900) on tqpair=0x131d540 00:20:01.150 [2024-11-20 22:40:01.745356] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.150 [2024-11-20 22:40:01.745374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.150 [2024-11-20 22:40:01.745379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.745382] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13567a0) on tqpair=0x131d540 00:20:01.150 [2024-11-20 22:40:01.745398] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.745403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.150 [2024-11-20 22:40:01.745407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131d540) 00:20:01.151 [2024-11-20 22:40:01.745414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.151 [2024-11-20 22:40:01.745443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13567a0, cid 4, qid 0 00:20:01.151 [2024-11-20 22:40:01.745546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.151 [2024-11-20 22:40:01.745559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.151 [2024-11-20 22:40:01.745563] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.745566] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131d540): datao=0, datal=3072, cccid=4 00:20:01.151 [2024-11-20 22:40:01.745570] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13567a0) on tqpair(0x131d540): expected_datao=0, payload_size=3072 00:20:01.151 [2024-11-20 
22:40:01.745577] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.745580] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.745596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.151 [2024-11-20 22:40:01.745601] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.151 [2024-11-20 22:40:01.745604] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.745608] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13567a0) on tqpair=0x131d540 00:20:01.151 [2024-11-20 22:40:01.745617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.745621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.745624] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131d540) 00:20:01.151 [2024-11-20 22:40:01.745631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.151 [2024-11-20 22:40:01.745654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13567a0, cid 4, qid 0 00:20:01.151 [2024-11-20 22:40:01.745772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.151 [2024-11-20 22:40:01.745777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.151 [2024-11-20 22:40:01.745781] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.745784] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131d540): datao=0, datal=8, cccid=4 00:20:01.151 [2024-11-20 22:40:01.745788] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13567a0) on tqpair(0x131d540): expected_datao=0, payload_size=8 00:20:01.151 [2024-11-20 22:40:01.745794] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.745797] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.151 ===================================================== 00:20:01.151 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:01.151 ===================================================== 00:20:01.151 Controller Capabilities/Features 00:20:01.151 ================================ 00:20:01.151 Vendor ID: 0000 00:20:01.151 Subsystem Vendor ID: 0000 00:20:01.151 Serial Number: .................... 00:20:01.151 Model Number: ........................................ 
00:20:01.151 Firmware Version: 24.01.1 00:20:01.151 Recommended Arb Burst: 0 00:20:01.151 IEEE OUI Identifier: 00 00 00 00:20:01.151 Multi-path I/O 00:20:01.151 May have multiple subsystem ports: No 00:20:01.151 May have multiple controllers: No 00:20:01.151 Associated with SR-IOV VF: No 00:20:01.151 Max Data Transfer Size: 131072 00:20:01.151 Max Number of Namespaces: 0 00:20:01.151 Max Number of I/O Queues: 1024 00:20:01.151 NVMe Specification Version (VS): 1.3 00:20:01.151 NVMe Specification Version (Identify): 1.3 00:20:01.151 Maximum Queue Entries: 128 00:20:01.151 Contiguous Queues Required: Yes 00:20:01.151 Arbitration Mechanisms Supported 00:20:01.151 Weighted Round Robin: Not Supported 00:20:01.151 Vendor Specific: Not Supported 00:20:01.151 Reset Timeout: 15000 ms 00:20:01.151 Doorbell Stride: 4 bytes 00:20:01.151 NVM Subsystem Reset: Not Supported 00:20:01.151 Command Sets Supported 00:20:01.151 NVM Command Set: Supported 00:20:01.151 Boot Partition: Not Supported 00:20:01.151 Memory Page Size Minimum: 4096 bytes 00:20:01.151 Memory Page Size Maximum: 4096 bytes 00:20:01.151 Persistent Memory Region: Not Supported 00:20:01.151 Optional Asynchronous Events Supported 00:20:01.151 Namespace Attribute Notices: Not Supported 00:20:01.151 Firmware Activation Notices: Not Supported 00:20:01.151 ANA Change Notices: Not Supported 00:20:01.151 PLE Aggregate Log Change Notices: Not Supported 00:20:01.151 LBA Status Info Alert Notices: Not Supported 00:20:01.151 EGE Aggregate Log Change Notices: Not Supported 00:20:01.151 Normal NVM Subsystem Shutdown event: Not Supported 00:20:01.151 Zone Descriptor Change Notices: Not Supported 00:20:01.151 Discovery Log Change Notices: Supported 00:20:01.151 Controller Attributes 00:20:01.151 128-bit Host Identifier: Not Supported 00:20:01.151 Non-Operational Permissive Mode: Not Supported 00:20:01.151 NVM Sets: Not Supported 00:20:01.151 Read Recovery Levels: Not Supported 00:20:01.151 Endurance Groups: Not Supported 00:20:01.151 Predictable Latency Mode: Not Supported 00:20:01.151 Traffic Based Keep ALive: Not Supported 00:20:01.151 Namespace Granularity: Not Supported 00:20:01.151 SQ Associations: Not Supported 00:20:01.151 UUID List: Not Supported 00:20:01.151 Multi-Domain Subsystem: Not Supported 00:20:01.151 Fixed Capacity Management: Not Supported 00:20:01.151 Variable Capacity Management: Not Supported 00:20:01.151 Delete Endurance Group: Not Supported 00:20:01.151 Delete NVM Set: Not Supported 00:20:01.151 Extended LBA Formats Supported: Not Supported 00:20:01.151 Flexible Data Placement Supported: Not Supported 00:20:01.151 00:20:01.151 Controller Memory Buffer Support 00:20:01.151 ================================ 00:20:01.151 Supported: No 00:20:01.151 00:20:01.151 Persistent Memory Region Support 00:20:01.151 ================================ 00:20:01.151 Supported: No 00:20:01.151 00:20:01.151 Admin Command Set Attributes 00:20:01.151 ============================ 00:20:01.151 Security Send/Receive: Not Supported 00:20:01.151 Format NVM: Not Supported 00:20:01.151 Firmware Activate/Download: Not Supported 00:20:01.151 Namespace Management: Not Supported 00:20:01.151 Device Self-Test: Not Supported 00:20:01.151 Directives: Not Supported 00:20:01.151 NVMe-MI: Not Supported 00:20:01.151 Virtualization Management: Not Supported 00:20:01.151 Doorbell Buffer Config: Not Supported 00:20:01.151 Get LBA Status Capability: Not Supported 00:20:01.151 Command & Feature Lockdown Capability: Not Supported 00:20:01.151 Abort Command Limit: 1 00:20:01.151 
Async Event Request Limit: 4 00:20:01.151 Number of Firmware Slots: N/A 00:20:01.151 Firmware Slot 1 Read-Only: N/A 00:20:01.151 [2024-11-20 22:40:01.786380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.151 [2024-11-20 22:40:01.786398] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.151 [2024-11-20 22:40:01.786402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.151 [2024-11-20 22:40:01.786417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13567a0) on tqpair=0x131d540 00:20:01.151 Firmware Activation Without Reset: N/A 00:20:01.151 Multiple Update Detection Support: N/A 00:20:01.151 Firmware Update Granularity: No Information Provided 00:20:01.151 Per-Namespace SMART Log: No 00:20:01.151 Asymmetric Namespace Access Log Page: Not Supported 00:20:01.151 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:01.151 Command Effects Log Page: Not Supported 00:20:01.151 Get Log Page Extended Data: Supported 00:20:01.151 Telemetry Log Pages: Not Supported 00:20:01.151 Persistent Event Log Pages: Not Supported 00:20:01.152 Supported Log Pages Log Page: May Support 00:20:01.152 Commands Supported & Effects Log Page: Not Supported 00:20:01.152 Feature Identifiers & Effects Log Page:May Support 00:20:01.152 NVMe-MI Commands & Effects Log Page: May Support 00:20:01.152 Data Area 4 for Telemetry Log: Not Supported 00:20:01.152 Error Log Page Entries Supported: 128 00:20:01.152 Keep Alive: Not Supported 00:20:01.152 00:20:01.152 NVM Command Set Attributes 00:20:01.152 ========================== 00:20:01.152 Submission Queue Entry Size 00:20:01.152 Max: 1 00:20:01.152 Min: 1 00:20:01.152 Completion Queue Entry Size 00:20:01.152 Max: 1 00:20:01.152 Min: 1 00:20:01.152 Number of Namespaces: 0 00:20:01.152 Compare Command: Not Supported 00:20:01.152 Write Uncorrectable Command: Not Supported 00:20:01.152 Dataset Management Command: Not Supported 00:20:01.152 Write Zeroes Command: Not Supported 00:20:01.152 Set Features Save Field: Not Supported 00:20:01.152 Reservations: Not Supported 00:20:01.152 Timestamp: Not Supported 00:20:01.152 Copy: Not Supported 00:20:01.152 Volatile Write Cache: Not Present 00:20:01.152 Atomic Write Unit (Normal): 1 00:20:01.152 Atomic Write Unit (PFail): 1 00:20:01.152 Atomic Compare & Write Unit: 1 00:20:01.152 Fused Compare & Write: Supported 00:20:01.152 Scatter-Gather List 00:20:01.152 SGL Command Set: Supported 00:20:01.152 SGL Keyed: Supported 00:20:01.152 SGL Bit Bucket Descriptor: Not Supported 00:20:01.152 SGL Metadata Pointer: Not Supported 00:20:01.152 Oversized SGL: Not Supported 00:20:01.152 SGL Metadata Address: Not Supported 00:20:01.152 SGL Offset: Supported 00:20:01.152 Transport SGL Data Block: Not Supported 00:20:01.152 Replay Protected Memory Block: Not Supported 00:20:01.152 00:20:01.152 Firmware Slot Information 00:20:01.152 ========================= 00:20:01.152 Active slot: 0 00:20:01.152 00:20:01.152 00:20:01.152 Error Log 00:20:01.152 ========= 00:20:01.152 00:20:01.152 Active Namespaces 00:20:01.152 ================= 00:20:01.152 Discovery Log Page 00:20:01.152 ================== 00:20:01.152 Generation Counter: 2 00:20:01.152 Number of Records: 2 00:20:01.152 Record Format: 0 00:20:01.152 00:20:01.152 Discovery Log Entry 0 00:20:01.152 ---------------------- 00:20:01.152 Transport Type: 3 (TCP) 00:20:01.152 Address Family: 1 (IPv4) 00:20:01.152 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:01.152 Entry Flags: 00:20:01.152 Duplicate
Returned Information: 1 00:20:01.152 Explicit Persistent Connection Support for Discovery: 1 00:20:01.152 Transport Requirements: 00:20:01.152 Secure Channel: Not Required 00:20:01.152 Port ID: 0 (0x0000) 00:20:01.152 Controller ID: 65535 (0xffff) 00:20:01.152 Admin Max SQ Size: 128 00:20:01.152 Transport Service Identifier: 4420 00:20:01.152 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:01.152 Transport Address: 10.0.0.2 00:20:01.152 Discovery Log Entry 1 00:20:01.152 ---------------------- 00:20:01.152 Transport Type: 3 (TCP) 00:20:01.152 Address Family: 1 (IPv4) 00:20:01.152 Subsystem Type: 2 (NVM Subsystem) 00:20:01.152 Entry Flags: 00:20:01.152 Duplicate Returned Information: 0 00:20:01.152 Explicit Persistent Connection Support for Discovery: 0 00:20:01.152 Transport Requirements: 00:20:01.152 Secure Channel: Not Required 00:20:01.152 Port ID: 0 (0x0000) 00:20:01.152 Controller ID: 65535 (0xffff) 00:20:01.152 Admin Max SQ Size: 128 00:20:01.152 Transport Service Identifier: 4420 00:20:01.152 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:01.152 Transport Address: 10.0.0.2 [2024-11-20 22:40:01.786547] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:01.152 [2024-11-20 22:40:01.786566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.152 [2024-11-20 22:40:01.786573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.152 [2024-11-20 22:40:01.786579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.152 [2024-11-20 22:40:01.786584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.152 [2024-11-20 22:40:01.786593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.786597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.786600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.152 [2024-11-20 22:40:01.786608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.152 [2024-11-20 22:40:01.786633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.152 [2024-11-20 22:40:01.786713] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.152 [2024-11-20 22:40:01.786719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.152 [2024-11-20 22:40:01.786723] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.786726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.152 [2024-11-20 22:40:01.786734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.786737] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.786740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.152 [2024-11-20 22:40:01.786747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.152 [2024-11-20 22:40:01.786769] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.152 [2024-11-20 22:40:01.786856] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.152 [2024-11-20 22:40:01.786863] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.152 [2024-11-20 22:40:01.786866] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.786869] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.152 [2024-11-20 22:40:01.786875] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:01.152 [2024-11-20 22:40:01.786879] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:01.152 [2024-11-20 22:40:01.786888] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.786892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.786895] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.152 [2024-11-20 22:40:01.786901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.152 [2024-11-20 22:40:01.786918] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.152 [2024-11-20 22:40:01.786994] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.152 [2024-11-20 22:40:01.787005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.152 [2024-11-20 22:40:01.787009] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.787012] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.152 [2024-11-20 22:40:01.787023] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.787027] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.787030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.152 [2024-11-20 22:40:01.787036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.152 [2024-11-20 22:40:01.787054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.152 [2024-11-20 22:40:01.787122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.152 [2024-11-20 22:40:01.787132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.152 [2024-11-20 22:40:01.787136] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.787139] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.152 [2024-11-20 22:40:01.787149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.787153] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.152 [2024-11-20 22:40:01.787156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x131d540) 00:20:01.152 [2024-11-20 22:40:01.787162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.152 [2024-11-20 22:40:01.787180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.152 [2024-11-20 22:40:01.787240] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.152 [2024-11-20 22:40:01.787252] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.787256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.787269] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.787295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.153 [2024-11-20 22:40:01.787314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.787412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.787424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.787427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.787441] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.787455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.153 [2024-11-20 22:40:01.787473] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.787534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.787545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.787548] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787552] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.787562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787566] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.787575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:01.153 [2024-11-20 22:40:01.787593] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.787649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.787655] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.787658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.787670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787674] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787678] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.787684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.153 [2024-11-20 22:40:01.787699] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.787756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.787762] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.787765] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.787778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.787791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.153 [2024-11-20 22:40:01.787807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.787870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.787881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.787884] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787888] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.787898] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.787905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.787911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.153 [2024-11-20 22:40:01.787928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.787989] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.787995] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.787998] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788001] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.788010] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.788023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.153 [2024-11-20 22:40:01.788039] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.788109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.788114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.788118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.788130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.788143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.153 [2024-11-20 22:40:01.788159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.788227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.788232] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.153 [2024-11-20 22:40:01.788235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.153 [2024-11-20 22:40:01.788248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788252] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.153 [2024-11-20 22:40:01.788255] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.153 [2024-11-20 22:40:01.788261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.153 [2024-11-20 22:40:01.792295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.153 [2024-11-20 22:40:01.792318] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.153 [2024-11-20 22:40:01.792325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.154 
[2024-11-20 22:40:01.792328] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.154 [2024-11-20 22:40:01.792332] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.154 [2024-11-20 22:40:01.792344] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.154 [2024-11-20 22:40:01.792349] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.154 [2024-11-20 22:40:01.792352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131d540) 00:20:01.154 [2024-11-20 22:40:01.792359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.154 [2024-11-20 22:40:01.792382] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1356640, cid 3, qid 0 00:20:01.154 [2024-11-20 22:40:01.792451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.154 [2024-11-20 22:40:01.792457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.154 [2024-11-20 22:40:01.792460] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.154 [2024-11-20 22:40:01.792463] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1356640) on tqpair=0x131d540 00:20:01.154 [2024-11-20 22:40:01.792471] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:20:01.154 00:20:01.154 22:40:01 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:01.154 [2024-11-20 22:40:01.824520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:01.154 [2024-11-20 22:40:01.824554] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93402 ] 00:20:01.416 [2024-11-20 22:40:01.955686] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:01.416 [2024-11-20 22:40:01.955742] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:01.416 [2024-11-20 22:40:01.955748] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:01.416 [2024-11-20 22:40:01.955758] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:01.416 [2024-11-20 22:40:01.955768] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:01.416 [2024-11-20 22:40:01.955864] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:01.416 [2024-11-20 22:40:01.955908] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1730540 0 00:20:01.416 [2024-11-20 22:40:01.962295] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:01.416 [2024-11-20 22:40:01.962314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:01.416 [2024-11-20 22:40:01.962319] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:01.416 [2024-11-20 22:40:01.962323] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:01.416 [2024-11-20 22:40:01.962367] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.962373] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.962377] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.416 [2024-11-20 22:40:01.962388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:01.416 [2024-11-20 22:40:01.962415] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.416 [2024-11-20 22:40:01.970293] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.416 [2024-11-20 22:40:01.970313] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.416 [2024-11-20 22:40:01.970317] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970321] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.416 [2024-11-20 22:40:01.970330] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:01.416 [2024-11-20 22:40:01.970336] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:01.416 [2024-11-20 22:40:01.970342] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:01.416 [2024-11-20 22:40:01.970356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970365] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.416 [2024-11-20 22:40:01.970373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.416 [2024-11-20 22:40:01.970398] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.416 [2024-11-20 22:40:01.970474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.416 [2024-11-20 22:40:01.970480] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.416 [2024-11-20 22:40:01.970483] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970487] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.416 [2024-11-20 22:40:01.970492] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:01.416 [2024-11-20 22:40:01.970499] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:01.416 [2024-11-20 22:40:01.970506] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970510] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970513] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.416 [2024-11-20 22:40:01.970520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.416 [2024-11-20 22:40:01.970538] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.416 [2024-11-20 22:40:01.970886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.416 [2024-11-20 22:40:01.970900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.416 [2024-11-20 22:40:01.970904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.416 [2024-11-20 22:40:01.970913] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:01.416 [2024-11-20 22:40:01.970921] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:01.416 [2024-11-20 22:40:01.970928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.970936] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.416 [2024-11-20 22:40:01.970944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.416 [2024-11-20 22:40:01.970963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.416 [2024-11-20 22:40:01.971026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.416 [2024-11-20 22:40:01.971032] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.416 [2024-11-20 
22:40:01.971035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.971039] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.416 [2024-11-20 22:40:01.971045] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:01.416 [2024-11-20 22:40:01.971053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.971058] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.416 [2024-11-20 22:40:01.971061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.971067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.417 [2024-11-20 22:40:01.971084] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.417 [2024-11-20 22:40:01.971449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.417 [2024-11-20 22:40:01.971462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.417 [2024-11-20 22:40:01.971466] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.971470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.417 [2024-11-20 22:40:01.971475] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:01.417 [2024-11-20 22:40:01.971480] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:01.417 [2024-11-20 22:40:01.971487] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:01.417 [2024-11-20 22:40:01.971593] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:01.417 [2024-11-20 22:40:01.971597] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:01.417 [2024-11-20 22:40:01.971604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.971608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.971611] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.971618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.417 [2024-11-20 22:40:01.971638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.417 [2024-11-20 22:40:01.971887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.417 [2024-11-20 22:40:01.971899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.417 [2024-11-20 22:40:01.971903] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.971907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.417 
[2024-11-20 22:40:01.971912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:01.417 [2024-11-20 22:40:01.971922] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.971926] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.971929] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.971936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.417 [2024-11-20 22:40:01.971954] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.417 [2024-11-20 22:40:01.972017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.417 [2024-11-20 22:40:01.972023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.417 [2024-11-20 22:40:01.972026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.417 [2024-11-20 22:40:01.972034] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:01.417 [2024-11-20 22:40:01.972039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:01.417 [2024-11-20 22:40:01.972046] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:01.417 [2024-11-20 22:40:01.972061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:01.417 [2024-11-20 22:40:01.972070] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972077] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.972084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.417 [2024-11-20 22:40:01.972103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.417 [2024-11-20 22:40:01.972442] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.417 [2024-11-20 22:40:01.972456] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.417 [2024-11-20 22:40:01.972460] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972464] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1730540): datao=0, datal=4096, cccid=0 00:20:01.417 [2024-11-20 22:40:01.972468] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1769220) on tqpair(0x1730540): expected_datao=0, payload_size=4096 00:20:01.417 [2024-11-20 22:40:01.972476] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972480] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.417 [2024-11-20 22:40:01.972493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.417 [2024-11-20 22:40:01.972496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972499] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.417 [2024-11-20 22:40:01.972507] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:01.417 [2024-11-20 22:40:01.972512] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:01.417 [2024-11-20 22:40:01.972516] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:01.417 [2024-11-20 22:40:01.972520] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:01.417 [2024-11-20 22:40:01.972524] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:01.417 [2024-11-20 22:40:01.972529] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:01.417 [2024-11-20 22:40:01.972541] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:01.417 [2024-11-20 22:40:01.972548] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.972563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:01.417 [2024-11-20 22:40:01.972595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.417 [2024-11-20 22:40:01.972806] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.417 [2024-11-20 22:40:01.972812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.417 [2024-11-20 22:40:01.972815] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972819] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769220) on tqpair=0x1730540 00:20:01.417 [2024-11-20 22:40:01.972826] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972829] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.972838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.417 [2024-11-20 22:40:01.972844] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972847] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972850] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.972856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.417 [2024-11-20 22:40:01.972861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972864] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972868] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.972873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.417 [2024-11-20 22:40:01.972878] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972881] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.417 [2024-11-20 22:40:01.972884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1730540) 00:20:01.417 [2024-11-20 22:40:01.972889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.417 [2024-11-20 22:40:01.972894] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:01.417 [2024-11-20 22:40:01.972905] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.972912] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.972915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.972919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1730540) 00:20:01.418 [2024-11-20 22:40:01.972925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.418 [2024-11-20 22:40:01.972944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769220, cid 0, qid 0 00:20:01.418 [2024-11-20 22:40:01.972951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769380, cid 1, qid 0 00:20:01.418 [2024-11-20 22:40:01.972955] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17694e0, cid 2, qid 0 00:20:01.418 [2024-11-20 22:40:01.972959] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769640, cid 3, qid 0 00:20:01.418 [2024-11-20 22:40:01.972963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17697a0, cid 4, qid 0 00:20:01.418 [2024-11-20 22:40:01.973352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.418 [2024-11-20 22:40:01.973365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.418 [2024-11-20 22:40:01.973369] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.973372] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17697a0) on tqpair=0x1730540 00:20:01.418 [2024-11-20 22:40:01.973378] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:01.418 [2024-11-20 22:40:01.973383] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.973391] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.973406] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.973413] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.973417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.973421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1730540) 00:20:01.418 [2024-11-20 22:40:01.973427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:01.418 [2024-11-20 22:40:01.973448] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17697a0, cid 4, qid 0 00:20:01.418 [2024-11-20 22:40:01.973518] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.418 [2024-11-20 22:40:01.973524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.418 [2024-11-20 22:40:01.973527] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.973531] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17697a0) on tqpair=0x1730540 00:20:01.418 [2024-11-20 22:40:01.973603] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.973614] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.973622] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.973626] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.973629] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1730540) 00:20:01.418 [2024-11-20 22:40:01.973636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.418 [2024-11-20 22:40:01.973655] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17697a0, cid 4, qid 0 00:20:01.418 [2024-11-20 22:40:01.974081] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.418 [2024-11-20 22:40:01.974094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.418 [2024-11-20 22:40:01.974099] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.974102] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1730540): datao=0, datal=4096, cccid=4 00:20:01.418 [2024-11-20 22:40:01.974106] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17697a0) on tqpair(0x1730540): expected_datao=0, payload_size=4096 00:20:01.418 [2024-11-20 22:40:01.974113] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.974117] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:20:01.418 [2024-11-20 22:40:01.974125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.418 [2024-11-20 22:40:01.974130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.418 [2024-11-20 22:40:01.974134] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.974137] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17697a0) on tqpair=0x1730540 00:20:01.418 [2024-11-20 22:40:01.974153] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:01.418 [2024-11-20 22:40:01.974165] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.974175] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.974182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.974186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.974189] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1730540) 00:20:01.418 [2024-11-20 22:40:01.974196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.418 [2024-11-20 22:40:01.974215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17697a0, cid 4, qid 0 00:20:01.418 [2024-11-20 22:40:01.978298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.418 [2024-11-20 22:40:01.978314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.418 [2024-11-20 22:40:01.978319] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978322] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1730540): datao=0, datal=4096, cccid=4 00:20:01.418 [2024-11-20 22:40:01.978327] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17697a0) on tqpair(0x1730540): expected_datao=0, payload_size=4096 00:20:01.418 [2024-11-20 22:40:01.978334] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978337] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.418 [2024-11-20 22:40:01.978347] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.418 [2024-11-20 22:40:01.978351] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978355] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17697a0) on tqpair=0x1730540 00:20:01.418 [2024-11-20 22:40:01.978375] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.978387] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.978396] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978400] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1730540) 00:20:01.418 [2024-11-20 22:40:01.978411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.418 [2024-11-20 22:40:01.978435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17697a0, cid 4, qid 0 00:20:01.418 [2024-11-20 22:40:01.978534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.418 [2024-11-20 22:40:01.978540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.418 [2024-11-20 22:40:01.978543] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978546] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1730540): datao=0, datal=4096, cccid=4 00:20:01.418 [2024-11-20 22:40:01.978551] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17697a0) on tqpair(0x1730540): expected_datao=0, payload_size=4096 00:20:01.418 [2024-11-20 22:40:01.978557] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978561] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978873] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.418 [2024-11-20 22:40:01.978887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.418 [2024-11-20 22:40:01.978891] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.418 [2024-11-20 22:40:01.978895] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17697a0) on tqpair=0x1730540 00:20:01.418 [2024-11-20 22:40:01.978904] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.978912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:01.418 [2024-11-20 22:40:01.978922] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:01.419 [2024-11-20 22:40:01.978929] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:01.419 [2024-11-20 22:40:01.978934] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:01.419 [2024-11-20 22:40:01.978938] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:01.419 [2024-11-20 22:40:01.978943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:01.419 [2024-11-20 22:40:01.978948] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:01.419 [2024-11-20 22:40:01.978961] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.978966] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.978969] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.978975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.419 [2024-11-20 22:40:01.978982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.978985] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.978988] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.978994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.419 [2024-11-20 22:40:01.979019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17697a0, cid 4, qid 0 00:20:01.419 [2024-11-20 22:40:01.979026] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769900, cid 5, qid 0 00:20:01.419 [2024-11-20 22:40:01.979421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.419 [2024-11-20 22:40:01.979435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.419 [2024-11-20 22:40:01.979440] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.979443] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17697a0) on tqpair=0x1730540 00:20:01.419 [2024-11-20 22:40:01.979450] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.419 [2024-11-20 22:40:01.979455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.419 [2024-11-20 22:40:01.979459] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.979462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769900) on tqpair=0x1730540 00:20:01.419 [2024-11-20 22:40:01.979474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.979478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.979481] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.979488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.419 [2024-11-20 22:40:01.979509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769900, cid 5, qid 0 00:20:01.419 [2024-11-20 22:40:01.979658] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.419 [2024-11-20 22:40:01.979664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.419 [2024-11-20 22:40:01.979667] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.979670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769900) on tqpair=0x1730540 00:20:01.419 [2024-11-20 22:40:01.979680] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.979684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.979687] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.979693] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.419 [2024-11-20 22:40:01.979708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769900, cid 5, qid 0 00:20:01.419 [2024-11-20 22:40:01.979980] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.419 [2024-11-20 22:40:01.979992] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.419 [2024-11-20 22:40:01.979996] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980000] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769900) on tqpair=0x1730540 00:20:01.419 [2024-11-20 22:40:01.980010] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.980024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.419 [2024-11-20 22:40:01.980042] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769900, cid 5, qid 0 00:20:01.419 [2024-11-20 22:40:01.980099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.419 [2024-11-20 22:40:01.980105] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.419 [2024-11-20 22:40:01.980115] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980118] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769900) on tqpair=0x1730540 00:20:01.419 [2024-11-20 22:40:01.980130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.980144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.419 [2024-11-20 22:40:01.980151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980154] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980158] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.980163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.419 [2024-11-20 22:40:01.980169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980173] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.980182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:01.419 [2024-11-20 22:40:01.980188] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980192] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980196] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1730540) 00:20:01.419 [2024-11-20 22:40:01.980201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.419 [2024-11-20 22:40:01.980219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769900, cid 5, qid 0 00:20:01.419 [2024-11-20 22:40:01.980226] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17697a0, cid 4, qid 0 00:20:01.419 [2024-11-20 22:40:01.980230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769a60, cid 6, qid 0 00:20:01.419 [2024-11-20 22:40:01.980234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769bc0, cid 7, qid 0 00:20:01.419 [2024-11-20 22:40:01.980672] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.419 [2024-11-20 22:40:01.980698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.419 [2024-11-20 22:40:01.980702] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980705] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1730540): datao=0, datal=8192, cccid=5 00:20:01.419 [2024-11-20 22:40:01.980710] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1769900) on tqpair(0x1730540): expected_datao=0, payload_size=8192 00:20:01.419 [2024-11-20 22:40:01.980726] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980731] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.419 [2024-11-20 22:40:01.980741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.419 [2024-11-20 22:40:01.980744] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.419 [2024-11-20 22:40:01.980747] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1730540): datao=0, datal=512, cccid=4 00:20:01.420 [2024-11-20 22:40:01.980751] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17697a0) on tqpair(0x1730540): expected_datao=0, payload_size=512 00:20:01.420 [2024-11-20 22:40:01.980757] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980760] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.420 [2024-11-20 22:40:01.980770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.420 [2024-11-20 22:40:01.980773] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980776] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1730540): datao=0, datal=512, cccid=6 00:20:01.420 [2024-11-20 22:40:01.980780] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1769a60) on tqpair(0x1730540): expected_datao=0, payload_size=512 00:20:01.420 [2024-11-20 22:40:01.980786] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980789] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:01.420 [2024-11-20 22:40:01.980799] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:01.420 [2024-11-20 22:40:01.980802] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980805] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1730540): datao=0, datal=4096, cccid=7 00:20:01.420 [2024-11-20 22:40:01.980809] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1769bc0) on tqpair(0x1730540): expected_datao=0, payload_size=4096 00:20:01.420 [2024-11-20 22:40:01.980815] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980818] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.420 [2024-11-20 22:40:01.980830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.420 [2024-11-20 22:40:01.980833] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980836] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769900) on tqpair=0x1730540 00:20:01.420 [2024-11-20 22:40:01.980859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.420 [2024-11-20 22:40:01.980865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.420 [2024-11-20 22:40:01.980868] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980871] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17697a0) on tqpair=0x1730540 00:20:01.420 [2024-11-20 22:40:01.980881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.420 [2024-11-20 22:40:01.980886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.420 [2024-11-20 22:40:01.980889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769a60) on tqpair=0x1730540 00:20:01.420 [2024-11-20 22:40:01.980899] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.420 [2024-11-20 22:40:01.980905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.420 [2024-11-20 22:40:01.980908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.420 [2024-11-20 22:40:01.980922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769bc0) on tqpair=0x1730540 00:20:01.420 ===================================================== 00:20:01.420 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:01.420 ===================================================== 00:20:01.420 Controller Capabilities/Features 00:20:01.420 ================================ 00:20:01.420 Vendor ID: 8086 00:20:01.420 Subsystem Vendor ID: 8086 00:20:01.420 Serial Number: SPDK00000000000001 00:20:01.420 Model Number: SPDK bdev Controller 00:20:01.420 Firmware Version: 24.01.1 00:20:01.420 Recommended Arb Burst: 6 00:20:01.420 IEEE OUI Identifier: e4 d2 5c 00:20:01.420 Multi-path I/O 00:20:01.420 May have multiple subsystem 
ports: Yes 00:20:01.420 May have multiple controllers: Yes 00:20:01.420 Associated with SR-IOV VF: No 00:20:01.420 Max Data Transfer Size: 131072 00:20:01.420 Max Number of Namespaces: 32 00:20:01.420 Max Number of I/O Queues: 127 00:20:01.420 NVMe Specification Version (VS): 1.3 00:20:01.420 NVMe Specification Version (Identify): 1.3 00:20:01.420 Maximum Queue Entries: 128 00:20:01.420 Contiguous Queues Required: Yes 00:20:01.420 Arbitration Mechanisms Supported 00:20:01.420 Weighted Round Robin: Not Supported 00:20:01.420 Vendor Specific: Not Supported 00:20:01.420 Reset Timeout: 15000 ms 00:20:01.420 Doorbell Stride: 4 bytes 00:20:01.420 NVM Subsystem Reset: Not Supported 00:20:01.420 Command Sets Supported 00:20:01.420 NVM Command Set: Supported 00:20:01.420 Boot Partition: Not Supported 00:20:01.420 Memory Page Size Minimum: 4096 bytes 00:20:01.420 Memory Page Size Maximum: 4096 bytes 00:20:01.420 Persistent Memory Region: Not Supported 00:20:01.420 Optional Asynchronous Events Supported 00:20:01.420 Namespace Attribute Notices: Supported 00:20:01.420 Firmware Activation Notices: Not Supported 00:20:01.420 ANA Change Notices: Not Supported 00:20:01.420 PLE Aggregate Log Change Notices: Not Supported 00:20:01.420 LBA Status Info Alert Notices: Not Supported 00:20:01.420 EGE Aggregate Log Change Notices: Not Supported 00:20:01.420 Normal NVM Subsystem Shutdown event: Not Supported 00:20:01.420 Zone Descriptor Change Notices: Not Supported 00:20:01.420 Discovery Log Change Notices: Not Supported 00:20:01.420 Controller Attributes 00:20:01.420 128-bit Host Identifier: Supported 00:20:01.420 Non-Operational Permissive Mode: Not Supported 00:20:01.420 NVM Sets: Not Supported 00:20:01.420 Read Recovery Levels: Not Supported 00:20:01.420 Endurance Groups: Not Supported 00:20:01.420 Predictable Latency Mode: Not Supported 00:20:01.420 Traffic Based Keep ALive: Not Supported 00:20:01.420 Namespace Granularity: Not Supported 00:20:01.420 SQ Associations: Not Supported 00:20:01.420 UUID List: Not Supported 00:20:01.420 Multi-Domain Subsystem: Not Supported 00:20:01.420 Fixed Capacity Management: Not Supported 00:20:01.420 Variable Capacity Management: Not Supported 00:20:01.420 Delete Endurance Group: Not Supported 00:20:01.420 Delete NVM Set: Not Supported 00:20:01.420 Extended LBA Formats Supported: Not Supported 00:20:01.420 Flexible Data Placement Supported: Not Supported 00:20:01.420 00:20:01.420 Controller Memory Buffer Support 00:20:01.420 ================================ 00:20:01.420 Supported: No 00:20:01.420 00:20:01.420 Persistent Memory Region Support 00:20:01.420 ================================ 00:20:01.420 Supported: No 00:20:01.420 00:20:01.420 Admin Command Set Attributes 00:20:01.420 ============================ 00:20:01.420 Security Send/Receive: Not Supported 00:20:01.420 Format NVM: Not Supported 00:20:01.420 Firmware Activate/Download: Not Supported 00:20:01.420 Namespace Management: Not Supported 00:20:01.420 Device Self-Test: Not Supported 00:20:01.420 Directives: Not Supported 00:20:01.420 NVMe-MI: Not Supported 00:20:01.420 Virtualization Management: Not Supported 00:20:01.420 Doorbell Buffer Config: Not Supported 00:20:01.420 Get LBA Status Capability: Not Supported 00:20:01.420 Command & Feature Lockdown Capability: Not Supported 00:20:01.420 Abort Command Limit: 4 00:20:01.420 Async Event Request Limit: 4 00:20:01.420 Number of Firmware Slots: N/A 00:20:01.420 Firmware Slot 1 Read-Only: N/A 00:20:01.420 Firmware Activation Without Reset: N/A 00:20:01.420 Multiple 
Update Detection Support: N/A 00:20:01.420 Firmware Update Granularity: No Information Provided 00:20:01.420 Per-Namespace SMART Log: No 00:20:01.420 Asymmetric Namespace Access Log Page: Not Supported 00:20:01.420 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:01.420 Command Effects Log Page: Supported 00:20:01.420 Get Log Page Extended Data: Supported 00:20:01.420 Telemetry Log Pages: Not Supported 00:20:01.420 Persistent Event Log Pages: Not Supported 00:20:01.421 Supported Log Pages Log Page: May Support 00:20:01.421 Commands Supported & Effects Log Page: Not Supported 00:20:01.421 Feature Identifiers & Effects Log Page:May Support 00:20:01.421 NVMe-MI Commands & Effects Log Page: May Support 00:20:01.421 Data Area 4 for Telemetry Log: Not Supported 00:20:01.421 Error Log Page Entries Supported: 128 00:20:01.421 Keep Alive: Supported 00:20:01.421 Keep Alive Granularity: 10000 ms 00:20:01.421 00:20:01.421 NVM Command Set Attributes 00:20:01.421 ========================== 00:20:01.421 Submission Queue Entry Size 00:20:01.421 Max: 64 00:20:01.421 Min: 64 00:20:01.421 Completion Queue Entry Size 00:20:01.421 Max: 16 00:20:01.421 Min: 16 00:20:01.421 Number of Namespaces: 32 00:20:01.421 Compare Command: Supported 00:20:01.421 Write Uncorrectable Command: Not Supported 00:20:01.421 Dataset Management Command: Supported 00:20:01.421 Write Zeroes Command: Supported 00:20:01.421 Set Features Save Field: Not Supported 00:20:01.421 Reservations: Supported 00:20:01.421 Timestamp: Not Supported 00:20:01.421 Copy: Supported 00:20:01.421 Volatile Write Cache: Present 00:20:01.421 Atomic Write Unit (Normal): 1 00:20:01.421 Atomic Write Unit (PFail): 1 00:20:01.421 Atomic Compare & Write Unit: 1 00:20:01.421 Fused Compare & Write: Supported 00:20:01.421 Scatter-Gather List 00:20:01.421 SGL Command Set: Supported 00:20:01.421 SGL Keyed: Supported 00:20:01.421 SGL Bit Bucket Descriptor: Not Supported 00:20:01.421 SGL Metadata Pointer: Not Supported 00:20:01.421 Oversized SGL: Not Supported 00:20:01.421 SGL Metadata Address: Not Supported 00:20:01.421 SGL Offset: Supported 00:20:01.421 Transport SGL Data Block: Not Supported 00:20:01.421 Replay Protected Memory Block: Not Supported 00:20:01.421 00:20:01.421 Firmware Slot Information 00:20:01.421 ========================= 00:20:01.421 Active slot: 1 00:20:01.421 Slot 1 Firmware Revision: 24.01.1 00:20:01.421 00:20:01.421 00:20:01.421 Commands Supported and Effects 00:20:01.421 ============================== 00:20:01.421 Admin Commands 00:20:01.421 -------------- 00:20:01.421 Get Log Page (02h): Supported 00:20:01.421 Identify (06h): Supported 00:20:01.421 Abort (08h): Supported 00:20:01.421 Set Features (09h): Supported 00:20:01.421 Get Features (0Ah): Supported 00:20:01.421 Asynchronous Event Request (0Ch): Supported 00:20:01.421 Keep Alive (18h): Supported 00:20:01.421 I/O Commands 00:20:01.421 ------------ 00:20:01.421 Flush (00h): Supported LBA-Change 00:20:01.421 Write (01h): Supported LBA-Change 00:20:01.421 Read (02h): Supported 00:20:01.421 Compare (05h): Supported 00:20:01.421 Write Zeroes (08h): Supported LBA-Change 00:20:01.421 Dataset Management (09h): Supported LBA-Change 00:20:01.421 Copy (19h): Supported LBA-Change 00:20:01.421 Unknown (79h): Supported LBA-Change 00:20:01.421 Unknown (7Ah): Supported 00:20:01.421 00:20:01.421 Error Log 00:20:01.421 ========= 00:20:01.421 00:20:01.421 Arbitration 00:20:01.421 =========== 00:20:01.421 Arbitration Burst: 1 00:20:01.421 00:20:01.421 Power Management 00:20:01.421 ================ 00:20:01.421 
Number of Power States: 1 00:20:01.421 Current Power State: Power State #0 00:20:01.421 Power State #0: 00:20:01.421 Max Power: 0.00 W 00:20:01.421 Non-Operational State: Operational 00:20:01.421 Entry Latency: Not Reported 00:20:01.421 Exit Latency: Not Reported 00:20:01.421 Relative Read Throughput: 0 00:20:01.421 Relative Read Latency: 0 00:20:01.421 Relative Write Throughput: 0 00:20:01.421 Relative Write Latency: 0 00:20:01.421 Idle Power: Not Reported 00:20:01.421 Active Power: Not Reported 00:20:01.421 Non-Operational Permissive Mode: Not Supported 00:20:01.421 00:20:01.421 Health Information 00:20:01.421 ================== 00:20:01.421 Critical Warnings: 00:20:01.421 Available Spare Space: OK 00:20:01.421 Temperature: OK 00:20:01.421 Device Reliability: OK 00:20:01.421 Read Only: No 00:20:01.421 Volatile Memory Backup: OK 00:20:01.421 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:01.421 Temperature Threshold: [2024-11-20 22:40:01.981021] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.421 [2024-11-20 22:40:01.981028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.421 [2024-11-20 22:40:01.981031] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1730540) 00:20:01.421 [2024-11-20 22:40:01.981038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.421 [2024-11-20 22:40:01.981060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769bc0, cid 7, qid 0 00:20:01.421 [2024-11-20 22:40:01.981419] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.421 [2024-11-20 22:40:01.981433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.421 [2024-11-20 22:40:01.981437] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.421 [2024-11-20 22:40:01.981441] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769bc0) on tqpair=0x1730540 00:20:01.421 [2024-11-20 22:40:01.981474] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:01.421 [2024-11-20 22:40:01.981485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.421 [2024-11-20 22:40:01.981492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.421 [2024-11-20 22:40:01.981497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.421 [2024-11-20 22:40:01.981502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.422 [2024-11-20 22:40:01.981511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.981515] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.981518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1730540) 00:20:01.422 [2024-11-20 22:40:01.981525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.422 [2024-11-20 22:40:01.981568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769640, cid 3, qid 0 
00:20:01.422 [2024-11-20 22:40:01.981799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.422 [2024-11-20 22:40:01.981811] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.422 [2024-11-20 22:40:01.981816] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.981819] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769640) on tqpair=0x1730540 00:20:01.422 [2024-11-20 22:40:01.981827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.981832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.981835] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1730540) 00:20:01.422 [2024-11-20 22:40:01.981841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.422 [2024-11-20 22:40:01.981874] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769640, cid 3, qid 0 00:20:01.422 [2024-11-20 22:40:01.982144] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.422 [2024-11-20 22:40:01.982156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.422 [2024-11-20 22:40:01.982160] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.982164] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769640) on tqpair=0x1730540 00:20:01.422 [2024-11-20 22:40:01.982169] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:01.422 [2024-11-20 22:40:01.982173] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:01.422 [2024-11-20 22:40:01.982182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.982187] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.982190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1730540) 00:20:01.422 [2024-11-20 22:40:01.982196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.422 [2024-11-20 22:40:01.982215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769640, cid 3, qid 0 00:20:01.422 [2024-11-20 22:40:01.986307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.422 [2024-11-20 22:40:01.986324] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.422 [2024-11-20 22:40:01.986329] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.986344] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769640) on tqpair=0x1730540 00:20:01.422 [2024-11-20 22:40:01.986356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.986361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.986364] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1730540) 00:20:01.422 [2024-11-20 22:40:01.986371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.422 [2024-11-20 
22:40:01.986395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1769640, cid 3, qid 0 00:20:01.422 [2024-11-20 22:40:01.986655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:01.422 [2024-11-20 22:40:01.986676] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:01.422 [2024-11-20 22:40:01.986680] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:01.422 [2024-11-20 22:40:01.986683] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1769640) on tqpair=0x1730540 00:20:01.422 [2024-11-20 22:40:01.986692] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:01.422 0 Kelvin (-273 Celsius) 00:20:01.422 Available Spare: 0% 00:20:01.422 Available Spare Threshold: 0% 00:20:01.422 Life Percentage Used: 0% 00:20:01.422 Data Units Read: 0 00:20:01.422 Data Units Written: 0 00:20:01.422 Host Read Commands: 0 00:20:01.422 Host Write Commands: 0 00:20:01.422 Controller Busy Time: 0 minutes 00:20:01.422 Power Cycles: 0 00:20:01.422 Power On Hours: 0 hours 00:20:01.422 Unsafe Shutdowns: 0 00:20:01.422 Unrecoverable Media Errors: 0 00:20:01.422 Lifetime Error Log Entries: 0 00:20:01.422 Warning Temperature Time: 0 minutes 00:20:01.422 Critical Temperature Time: 0 minutes 00:20:01.422 00:20:01.422 Number of Queues 00:20:01.422 ================ 00:20:01.422 Number of I/O Submission Queues: 127 00:20:01.422 Number of I/O Completion Queues: 127 00:20:01.422 00:20:01.422 Active Namespaces 00:20:01.422 ================= 00:20:01.422 Namespace ID:1 00:20:01.422 Error Recovery Timeout: Unlimited 00:20:01.422 Command Set Identifier: NVM (00h) 00:20:01.422 Deallocate: Supported 00:20:01.422 Deallocated/Unwritten Error: Not Supported 00:20:01.422 Deallocated Read Value: Unknown 00:20:01.422 Deallocate in Write Zeroes: Not Supported 00:20:01.422 Deallocated Guard Field: 0xFFFF 00:20:01.422 Flush: Supported 00:20:01.422 Reservation: Supported 00:20:01.422 Namespace Sharing Capabilities: Multiple Controllers 00:20:01.422 Size (in LBAs): 131072 (0GiB) 00:20:01.422 Capacity (in LBAs): 131072 (0GiB) 00:20:01.422 Utilization (in LBAs): 131072 (0GiB) 00:20:01.422 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:01.422 EUI64: ABCDEF0123456789 00:20:01.422 UUID: 73c92336-a2f1-4a7d-bf33-65ba781495bf 00:20:01.422 Thin Provisioning: Not Supported 00:20:01.422 Per-NS Atomic Units: Yes 00:20:01.422 Atomic Boundary Size (Normal): 0 00:20:01.422 Atomic Boundary Size (PFail): 0 00:20:01.422 Atomic Boundary Offset: 0 00:20:01.422 Maximum Single Source Range Length: 65535 00:20:01.422 Maximum Copy Length: 65535 00:20:01.422 Maximum Source Range Count: 1 00:20:01.422 NGUID/EUI64 Never Reused: No 00:20:01.422 Namespace Write Protected: No 00:20:01.422 Number of LBA Formats: 1 00:20:01.422 Current LBA Format: LBA Format #00 00:20:01.422 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:01.422 00:20:01.422 22:40:01 -- host/identify.sh@51 -- # sync 00:20:01.422 22:40:02 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.422 22:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.422 22:40:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.422 22:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.422 22:40:02 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:01.422 22:40:02 -- host/identify.sh@56 -- # nvmftestfini 00:20:01.422 22:40:02 -- nvmf/common.sh@476 -- # nvmfcleanup 
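Note: the controller dump above is produced by host/identify.sh running SPDK's identify example against the TCP listener created for nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. A minimal sketch of an equivalent manual invocation is below; the binary name/path and the subnqn key in the transport ID are assumptions based on a typical SPDK build tree (the trace only shows build/bin/spdk_nvme_perf explicitly), so treat this as illustrative rather than the exact command the harness ran.

    # Hedged sketch: re-run the identify dump by hand against the same listener.
    # Binary path and 'subnqn:' key are assumed from a standard SPDK checkout.
    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'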
00:20:01.422 22:40:02 -- nvmf/common.sh@116 -- # sync 00:20:01.422 22:40:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:01.422 22:40:02 -- nvmf/common.sh@119 -- # set +e 00:20:01.422 22:40:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:01.422 22:40:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:01.422 rmmod nvme_tcp 00:20:01.422 rmmod nvme_fabrics 00:20:01.422 rmmod nvme_keyring 00:20:01.422 22:40:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:01.422 22:40:02 -- nvmf/common.sh@123 -- # set -e 00:20:01.422 22:40:02 -- nvmf/common.sh@124 -- # return 0 00:20:01.422 22:40:02 -- nvmf/common.sh@477 -- # '[' -n 93336 ']' 00:20:01.422 22:40:02 -- nvmf/common.sh@478 -- # killprocess 93336 00:20:01.422 22:40:02 -- common/autotest_common.sh@936 -- # '[' -z 93336 ']' 00:20:01.422 22:40:02 -- common/autotest_common.sh@940 -- # kill -0 93336 00:20:01.422 22:40:02 -- common/autotest_common.sh@941 -- # uname 00:20:01.422 22:40:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:01.422 22:40:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93336 00:20:01.681 killing process with pid 93336 00:20:01.681 22:40:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:01.681 22:40:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:01.681 22:40:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93336' 00:20:01.681 22:40:02 -- common/autotest_common.sh@955 -- # kill 93336 00:20:01.681 [2024-11-20 22:40:02.171596] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:01.681 22:40:02 -- common/autotest_common.sh@960 -- # wait 93336 00:20:01.940 22:40:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:01.940 22:40:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:01.940 22:40:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:01.940 22:40:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.940 22:40:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:01.940 22:40:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.940 22:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.940 22:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.940 22:40:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:01.940 00:20:01.940 real 0m2.833s 00:20:01.940 user 0m7.622s 00:20:01.940 sys 0m0.787s 00:20:01.940 22:40:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:01.940 22:40:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.940 ************************************ 00:20:01.940 END TEST nvmf_identify 00:20:01.940 ************************************ 00:20:01.940 22:40:02 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:01.940 22:40:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:01.940 22:40:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:01.940 22:40:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.940 ************************************ 00:20:01.940 START TEST nvmf_perf 00:20:01.940 ************************************ 00:20:01.940 22:40:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:01.940 * Looking for test storage... 
00:20:01.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:01.940 22:40:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:01.940 22:40:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:01.940 22:40:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:02.199 22:40:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:02.199 22:40:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:02.199 22:40:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:02.199 22:40:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:02.199 22:40:02 -- scripts/common.sh@335 -- # IFS=.-: 00:20:02.199 22:40:02 -- scripts/common.sh@335 -- # read -ra ver1 00:20:02.199 22:40:02 -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.199 22:40:02 -- scripts/common.sh@336 -- # read -ra ver2 00:20:02.199 22:40:02 -- scripts/common.sh@337 -- # local 'op=<' 00:20:02.199 22:40:02 -- scripts/common.sh@339 -- # ver1_l=2 00:20:02.199 22:40:02 -- scripts/common.sh@340 -- # ver2_l=1 00:20:02.199 22:40:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:02.199 22:40:02 -- scripts/common.sh@343 -- # case "$op" in 00:20:02.199 22:40:02 -- scripts/common.sh@344 -- # : 1 00:20:02.199 22:40:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:02.199 22:40:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:02.199 22:40:02 -- scripts/common.sh@364 -- # decimal 1 00:20:02.199 22:40:02 -- scripts/common.sh@352 -- # local d=1 00:20:02.199 22:40:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.199 22:40:02 -- scripts/common.sh@354 -- # echo 1 00:20:02.199 22:40:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:02.199 22:40:02 -- scripts/common.sh@365 -- # decimal 2 00:20:02.199 22:40:02 -- scripts/common.sh@352 -- # local d=2 00:20:02.199 22:40:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.199 22:40:02 -- scripts/common.sh@354 -- # echo 2 00:20:02.199 22:40:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:02.200 22:40:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:02.200 22:40:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:02.200 22:40:02 -- scripts/common.sh@367 -- # return 0 00:20:02.200 22:40:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.200 22:40:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:02.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.200 --rc genhtml_branch_coverage=1 00:20:02.200 --rc genhtml_function_coverage=1 00:20:02.200 --rc genhtml_legend=1 00:20:02.200 --rc geninfo_all_blocks=1 00:20:02.200 --rc geninfo_unexecuted_blocks=1 00:20:02.200 00:20:02.200 ' 00:20:02.200 22:40:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:02.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.200 --rc genhtml_branch_coverage=1 00:20:02.200 --rc genhtml_function_coverage=1 00:20:02.200 --rc genhtml_legend=1 00:20:02.200 --rc geninfo_all_blocks=1 00:20:02.200 --rc geninfo_unexecuted_blocks=1 00:20:02.200 00:20:02.200 ' 00:20:02.200 22:40:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:02.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.200 --rc genhtml_branch_coverage=1 00:20:02.200 --rc genhtml_function_coverage=1 00:20:02.200 --rc genhtml_legend=1 00:20:02.200 --rc geninfo_all_blocks=1 00:20:02.200 --rc geninfo_unexecuted_blocks=1 00:20:02.200 00:20:02.200 ' 00:20:02.200 
22:40:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:02.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.200 --rc genhtml_branch_coverage=1 00:20:02.200 --rc genhtml_function_coverage=1 00:20:02.200 --rc genhtml_legend=1 00:20:02.200 --rc geninfo_all_blocks=1 00:20:02.200 --rc geninfo_unexecuted_blocks=1 00:20:02.200 00:20:02.200 ' 00:20:02.200 22:40:02 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:02.200 22:40:02 -- nvmf/common.sh@7 -- # uname -s 00:20:02.200 22:40:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.200 22:40:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.200 22:40:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.200 22:40:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.200 22:40:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.200 22:40:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.200 22:40:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.200 22:40:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.200 22:40:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.200 22:40:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.200 22:40:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:20:02.200 22:40:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:20:02.200 22:40:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.200 22:40:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.200 22:40:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:02.200 22:40:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:02.200 22:40:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.200 22:40:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.200 22:40:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.200 22:40:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.200 22:40:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.200 22:40:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.200 22:40:02 -- paths/export.sh@5 -- # export PATH 00:20:02.200 22:40:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.200 22:40:02 -- nvmf/common.sh@46 -- # : 0 00:20:02.200 22:40:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:02.200 22:40:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:02.200 22:40:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:02.200 22:40:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.200 22:40:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.200 22:40:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:02.200 22:40:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:02.200 22:40:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:02.200 22:40:02 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:02.200 22:40:02 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:02.200 22:40:02 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:02.200 22:40:02 -- host/perf.sh@17 -- # nvmftestinit 00:20:02.200 22:40:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:02.200 22:40:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.200 22:40:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:02.200 22:40:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:02.200 22:40:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:02.200 22:40:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.200 22:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.200 22:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.200 22:40:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:02.200 22:40:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:02.200 22:40:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:02.200 22:40:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:02.200 22:40:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:02.200 22:40:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:02.200 22:40:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.200 22:40:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.200 22:40:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:02.200 22:40:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:02.200 22:40:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:02.200 22:40:02 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:02.200 22:40:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:02.200 22:40:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.200 22:40:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:02.200 22:40:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:02.200 22:40:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:02.200 22:40:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:02.200 22:40:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:02.200 22:40:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:02.200 Cannot find device "nvmf_tgt_br" 00:20:02.200 22:40:02 -- nvmf/common.sh@154 -- # true 00:20:02.200 22:40:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.200 Cannot find device "nvmf_tgt_br2" 00:20:02.200 22:40:02 -- nvmf/common.sh@155 -- # true 00:20:02.200 22:40:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:02.200 22:40:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:02.200 Cannot find device "nvmf_tgt_br" 00:20:02.200 22:40:02 -- nvmf/common.sh@157 -- # true 00:20:02.200 22:40:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:02.200 Cannot find device "nvmf_tgt_br2" 00:20:02.200 22:40:02 -- nvmf/common.sh@158 -- # true 00:20:02.200 22:40:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:02.200 22:40:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:02.200 22:40:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.200 22:40:02 -- nvmf/common.sh@161 -- # true 00:20:02.200 22:40:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.200 22:40:02 -- nvmf/common.sh@162 -- # true 00:20:02.200 22:40:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:02.200 22:40:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:02.459 22:40:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:02.459 22:40:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:02.459 22:40:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:02.459 22:40:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:02.459 22:40:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:02.459 22:40:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:02.459 22:40:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:02.459 22:40:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:02.459 22:40:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:02.459 22:40:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:02.459 22:40:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:02.459 22:40:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:02.459 22:40:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:02.459 22:40:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:02.459 22:40:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:02.459 22:40:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:02.459 22:40:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:02.459 22:40:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:02.459 22:40:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:02.459 22:40:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:02.459 22:40:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:02.459 22:40:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:02.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:20:02.459 00:20:02.459 --- 10.0.0.2 ping statistics --- 00:20:02.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.459 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:02.459 22:40:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:02.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:02.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:02.459 00:20:02.459 --- 10.0.0.3 ping statistics --- 00:20:02.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.459 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:02.459 22:40:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:02.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:20:02.459 00:20:02.459 --- 10.0.0.1 ping statistics --- 00:20:02.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.459 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:02.459 22:40:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.459 22:40:03 -- nvmf/common.sh@421 -- # return 0 00:20:02.459 22:40:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:02.459 22:40:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.459 22:40:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:02.459 22:40:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:02.459 22:40:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.459 22:40:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:02.459 22:40:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:02.459 22:40:03 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:02.459 22:40:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:02.459 22:40:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.459 22:40:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.459 22:40:03 -- nvmf/common.sh@469 -- # nvmfpid=93573 00:20:02.460 22:40:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:02.460 22:40:03 -- nvmf/common.sh@470 -- # waitforlisten 93573 00:20:02.460 22:40:03 -- common/autotest_common.sh@829 -- # '[' -z 93573 ']' 00:20:02.460 22:40:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.460 22:40:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
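Note: the nvmf_veth_init sequence traced above builds the test topology that the pings then verify: a network namespace nvmf_tgt_ns_spdk for the target, veth pairs nvmf_init_if/nvmf_init_br and nvmf_tgt_if/nvmf_tgt_br, a bridge nvmf_br joining the host-side ends, and 10.0.0.1/10.0.0.2 on a /24. A condensed sketch using only commands that appear in the trace (second target interface and bridge omitted for brevity):

    # Condensed sketch of the veth/netns topology from nvmf_veth_init (see trace above).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace, as in the statistics above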
00:20:02.460 22:40:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.460 22:40:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.460 22:40:03 -- common/autotest_common.sh@10 -- # set +x 00:20:02.718 [2024-11-20 22:40:03.198157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:02.718 [2024-11-20 22:40:03.198376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.718 [2024-11-20 22:40:03.340541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.718 [2024-11-20 22:40:03.415250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:02.718 [2024-11-20 22:40:03.415404] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.718 [2024-11-20 22:40:03.415416] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.718 [2024-11-20 22:40:03.415424] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.718 [2024-11-20 22:40:03.415593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.718 [2024-11-20 22:40:03.416211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.718 [2024-11-20 22:40:03.416371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.718 [2024-11-20 22:40:03.416374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.655 22:40:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.655 22:40:04 -- common/autotest_common.sh@862 -- # return 0 00:20:03.655 22:40:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:03.655 22:40:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.655 22:40:04 -- common/autotest_common.sh@10 -- # set +x 00:20:03.655 22:40:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.655 22:40:04 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:03.655 22:40:04 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:04.221 22:40:04 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:04.221 22:40:04 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:04.222 22:40:04 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:04.222 22:40:04 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:04.788 22:40:05 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:04.788 22:40:05 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:04.788 22:40:05 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:04.788 22:40:05 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:04.788 22:40:05 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:04.788 [2024-11-20 22:40:05.428392] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.788 22:40:05 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.047 22:40:05 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:20:05.047 22:40:05 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.306 22:40:05 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:05.306 22:40:05 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:05.564 22:40:06 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.823 [2024-11-20 22:40:06.306089] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.823 22:40:06 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:05.823 22:40:06 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:05.823 22:40:06 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:05.823 22:40:06 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:05.823 22:40:06 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:07.201 Initializing NVMe Controllers 00:20:07.201 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:07.201 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:07.201 Initialization complete. Launching workers. 00:20:07.201 ======================================================== 00:20:07.201 Latency(us) 00:20:07.201 Device Information : IOPS MiB/s Average min max 00:20:07.201 PCIE (0000:00:06.0) NSID 1 from core 0: 23460.80 91.64 1364.12 215.40 7530.92 00:20:07.201 ======================================================== 00:20:07.201 Total : 23460.80 91.64 1364.12 215.40 7530.92 00:20:07.201 00:20:07.201 22:40:07 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.584 Initializing NVMe Controllers 00:20:08.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:08.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:08.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:08.584 Initialization complete. Launching workers. 
00:20:08.584 ======================================================== 00:20:08.584 Latency(us) 00:20:08.584 Device Information : IOPS MiB/s Average min max 00:20:08.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3541.38 13.83 282.05 105.15 7089.33 00:20:08.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.87 0.49 8071.56 5001.52 12001.35 00:20:08.584 ======================================================== 00:20:08.584 Total : 3666.25 14.32 547.36 105.15 12001.35 00:20:08.584 00:20:08.584 22:40:08 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.960 Initializing NVMe Controllers 00:20:09.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:09.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:09.960 Initialization complete. Launching workers. 00:20:09.960 ======================================================== 00:20:09.960 Latency(us) 00:20:09.960 Device Information : IOPS MiB/s Average min max 00:20:09.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10111.37 39.50 3165.17 627.56 8982.62 00:20:09.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2641.05 10.32 12226.29 7358.94 24115.74 00:20:09.960 ======================================================== 00:20:09.960 Total : 12752.42 49.81 5041.75 627.56 24115.74 00:20:09.960 00:20:09.960 22:40:10 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:09.960 22:40:10 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:12.496 Initializing NVMe Controllers 00:20:12.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.496 Controller IO queue size 128, less than required. 00:20:12.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.496 Controller IO queue size 128, less than required. 00:20:12.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:12.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:12.496 Initialization complete. Launching workers. 
00:20:12.496 ======================================================== 00:20:12.496 Latency(us) 00:20:12.496 Device Information : IOPS MiB/s Average min max 00:20:12.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1712.16 428.04 75668.70 46875.33 129613.47 00:20:12.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.04 145.01 232608.25 70489.53 352491.43 00:20:12.496 ======================================================== 00:20:12.496 Total : 2292.20 573.05 115382.06 46875.33 352491.43 00:20:12.496 00:20:12.496 22:40:12 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:12.496 No valid NVMe controllers or AIO or URING devices found 00:20:12.496 Initializing NVMe Controllers 00:20:12.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.496 Controller IO queue size 128, less than required. 00:20:12.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.496 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:12.496 Controller IO queue size 128, less than required. 00:20:12.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.496 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:12.496 WARNING: Some requested NVMe devices were skipped 00:20:12.496 22:40:13 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:15.031 Initializing NVMe Controllers 00:20:15.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.031 Controller IO queue size 128, less than required. 00:20:15.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.031 Controller IO queue size 128, less than required. 00:20:15.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:15.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:15.031 Initialization complete. Launching workers. 
00:20:15.031 00:20:15.031 ==================== 00:20:15.031 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:15.031 TCP transport: 00:20:15.031 polls: 11684 00:20:15.031 idle_polls: 8768 00:20:15.031 sock_completions: 2916 00:20:15.031 nvme_completions: 3938 00:20:15.031 submitted_requests: 6020 00:20:15.031 queued_requests: 1 00:20:15.031 00:20:15.031 ==================== 00:20:15.031 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:15.031 TCP transport: 00:20:15.031 polls: 11861 00:20:15.031 idle_polls: 8890 00:20:15.031 sock_completions: 2971 00:20:15.031 nvme_completions: 5913 00:20:15.031 submitted_requests: 9001 00:20:15.031 queued_requests: 1 00:20:15.031 ======================================================== 00:20:15.031 Latency(us) 00:20:15.031 Device Information : IOPS MiB/s Average min max 00:20:15.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1045.46 261.37 125135.70 79730.33 216324.58 00:20:15.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1538.27 384.57 84702.59 41106.75 117496.05 00:20:15.031 ======================================================== 00:20:15.031 Total : 2583.73 645.93 101063.17 41106.75 216324.58 00:20:15.031 00:20:15.031 22:40:15 -- host/perf.sh@66 -- # sync 00:20:15.031 22:40:15 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.290 22:40:15 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:15.290 22:40:15 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:15.290 22:40:15 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:15.548 22:40:16 -- host/perf.sh@72 -- # ls_guid=bcbd8a6f-4183-4931-8912-bdb115d5c227 00:20:15.548 22:40:16 -- host/perf.sh@73 -- # get_lvs_free_mb bcbd8a6f-4183-4931-8912-bdb115d5c227 00:20:15.548 22:40:16 -- common/autotest_common.sh@1353 -- # local lvs_uuid=bcbd8a6f-4183-4931-8912-bdb115d5c227 00:20:15.548 22:40:16 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:15.548 22:40:16 -- common/autotest_common.sh@1355 -- # local fc 00:20:15.548 22:40:16 -- common/autotest_common.sh@1356 -- # local cs 00:20:15.548 22:40:16 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:15.807 22:40:16 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:15.807 { 00:20:15.807 "base_bdev": "Nvme0n1", 00:20:15.807 "block_size": 4096, 00:20:15.807 "cluster_size": 4194304, 00:20:15.807 "free_clusters": 1278, 00:20:15.807 "name": "lvs_0", 00:20:15.807 "total_data_clusters": 1278, 00:20:15.807 "uuid": "bcbd8a6f-4183-4931-8912-bdb115d5c227" 00:20:15.807 } 00:20:15.807 ]' 00:20:15.807 22:40:16 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="bcbd8a6f-4183-4931-8912-bdb115d5c227") .free_clusters' 00:20:15.807 22:40:16 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:15.807 22:40:16 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="bcbd8a6f-4183-4931-8912-bdb115d5c227") .cluster_size' 00:20:15.807 5112 00:20:15.807 22:40:16 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:15.807 22:40:16 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:15.807 22:40:16 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:15.807 22:40:16 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:16.067 22:40:16 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u bcbd8a6f-4183-4931-8912-bdb115d5c227 lbd_0 5112 00:20:16.326 22:40:16 -- host/perf.sh@80 -- # lb_guid=a49d3d44-c8ca-4827-b714-c4f7f43acbdb 00:20:16.326 22:40:16 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore a49d3d44-c8ca-4827-b714-c4f7f43acbdb lvs_n_0 00:20:16.585 22:40:17 -- host/perf.sh@83 -- # ls_nested_guid=6be385ae-9c74-42ec-9a08-c6ee3ee313fe 00:20:16.585 22:40:17 -- host/perf.sh@84 -- # get_lvs_free_mb 6be385ae-9c74-42ec-9a08-c6ee3ee313fe 00:20:16.585 22:40:17 -- common/autotest_common.sh@1353 -- # local lvs_uuid=6be385ae-9c74-42ec-9a08-c6ee3ee313fe 00:20:16.585 22:40:17 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:16.585 22:40:17 -- common/autotest_common.sh@1355 -- # local fc 00:20:16.585 22:40:17 -- common/autotest_common.sh@1356 -- # local cs 00:20:16.585 22:40:17 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:16.844 22:40:17 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:16.844 { 00:20:16.844 "base_bdev": "Nvme0n1", 00:20:16.844 "block_size": 4096, 00:20:16.844 "cluster_size": 4194304, 00:20:16.844 "free_clusters": 0, 00:20:16.844 "name": "lvs_0", 00:20:16.844 "total_data_clusters": 1278, 00:20:16.844 "uuid": "bcbd8a6f-4183-4931-8912-bdb115d5c227" 00:20:16.844 }, 00:20:16.844 { 00:20:16.844 "base_bdev": "a49d3d44-c8ca-4827-b714-c4f7f43acbdb", 00:20:16.844 "block_size": 4096, 00:20:16.844 "cluster_size": 4194304, 00:20:16.845 "free_clusters": 1276, 00:20:16.845 "name": "lvs_n_0", 00:20:16.845 "total_data_clusters": 1276, 00:20:16.845 "uuid": "6be385ae-9c74-42ec-9a08-c6ee3ee313fe" 00:20:16.845 } 00:20:16.845 ]' 00:20:16.845 22:40:17 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="6be385ae-9c74-42ec-9a08-c6ee3ee313fe") .free_clusters' 00:20:16.845 22:40:17 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:16.845 22:40:17 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="6be385ae-9c74-42ec-9a08-c6ee3ee313fe") .cluster_size' 00:20:16.845 5104 00:20:16.845 22:40:17 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:16.845 22:40:17 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:16.845 22:40:17 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:16.845 22:40:17 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:16.845 22:40:17 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6be385ae-9c74-42ec-9a08-c6ee3ee313fe lbd_nest_0 5104 00:20:17.103 22:40:17 -- host/perf.sh@88 -- # lb_nested_guid=54f9c173-9d68-49d6-81ea-367184dec04a 00:20:17.104 22:40:17 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:17.363 22:40:18 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:17.363 22:40:18 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 54f9c173-9d68-49d6-81ea-367184dec04a 00:20:17.622 22:40:18 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.880 22:40:18 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:17.880 22:40:18 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:17.880 22:40:18 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:17.880 22:40:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:17.880 22:40:18 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:18.139 No valid NVMe controllers or AIO or URING devices found 00:20:18.139 Initializing NVMe Controllers 00:20:18.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.139 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:18.139 WARNING: Some requested NVMe devices were skipped 00:20:18.139 22:40:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:18.139 22:40:18 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.347 Initializing NVMe Controllers 00:20:30.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.347 Initialization complete. Launching workers. 00:20:30.347 ======================================================== 00:20:30.347 Latency(us) 00:20:30.347 Device Information : IOPS MiB/s Average min max 00:20:30.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 861.24 107.65 1160.67 393.59 7628.21 00:20:30.347 ======================================================== 00:20:30.347 Total : 861.24 107.65 1160.67 393.59 7628.21 00:20:30.347 00:20:30.347 22:40:29 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:30.347 22:40:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:30.347 22:40:29 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.347 No valid NVMe controllers or AIO or URING devices found 00:20:30.347 Initializing NVMe Controllers 00:20:30.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.347 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:30.347 WARNING: Some requested NVMe devices were skipped 00:20:30.347 22:40:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:30.347 22:40:29 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.350 Initializing NVMe Controllers 00:20:40.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.350 Initialization complete. Launching workers. 
00:20:40.350 ======================================================== 00:20:40.350 Latency(us) 00:20:40.350 Device Information : IOPS MiB/s Average min max 00:20:40.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1039.30 129.91 31200.58 7986.77 286853.88 00:20:40.350 ======================================================== 00:20:40.350 Total : 1039.30 129.91 31200.58 7986.77 286853.88 00:20:40.350 00:20:40.350 22:40:39 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:40.350 22:40:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:40.350 22:40:39 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.350 No valid NVMe controllers or AIO or URING devices found 00:20:40.350 Initializing NVMe Controllers 00:20:40.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.350 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:40.350 WARNING: Some requested NVMe devices were skipped 00:20:40.350 22:40:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:40.350 22:40:40 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.330 Initializing NVMe Controllers 00:20:50.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.330 Controller IO queue size 128, less than required. 00:20:50.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:50.330 Initialization complete. Launching workers. 
00:20:50.330 ======================================================== 00:20:50.330 Latency(us) 00:20:50.330 Device Information : IOPS MiB/s Average min max 00:20:50.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4030.30 503.79 31779.58 10505.23 63049.72 00:20:50.330 ======================================================== 00:20:50.330 Total : 4030.30 503.79 31779.58 10505.23 63049.72 00:20:50.330 00:20:50.330 22:40:50 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.331 22:40:50 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 54f9c173-9d68-49d6-81ea-367184dec04a 00:20:50.331 22:40:51 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:50.590 22:40:51 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a49d3d44-c8ca-4827-b714-c4f7f43acbdb 00:20:50.848 22:40:51 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:51.107 22:40:51 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:51.107 22:40:51 -- host/perf.sh@114 -- # nvmftestfini 00:20:51.107 22:40:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:51.107 22:40:51 -- nvmf/common.sh@116 -- # sync 00:20:51.107 22:40:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:51.107 22:40:51 -- nvmf/common.sh@119 -- # set +e 00:20:51.107 22:40:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:51.107 22:40:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:51.107 rmmod nvme_tcp 00:20:51.107 rmmod nvme_fabrics 00:20:51.107 rmmod nvme_keyring 00:20:51.107 22:40:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:51.107 22:40:51 -- nvmf/common.sh@123 -- # set -e 00:20:51.107 22:40:51 -- nvmf/common.sh@124 -- # return 0 00:20:51.107 22:40:51 -- nvmf/common.sh@477 -- # '[' -n 93573 ']' 00:20:51.107 22:40:51 -- nvmf/common.sh@478 -- # killprocess 93573 00:20:51.107 22:40:51 -- common/autotest_common.sh@936 -- # '[' -z 93573 ']' 00:20:51.107 22:40:51 -- common/autotest_common.sh@940 -- # kill -0 93573 00:20:51.107 22:40:51 -- common/autotest_common.sh@941 -- # uname 00:20:51.107 22:40:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.107 22:40:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93573 00:20:51.107 killing process with pid 93573 00:20:51.107 22:40:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:51.107 22:40:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:51.107 22:40:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93573' 00:20:51.107 22:40:51 -- common/autotest_common.sh@955 -- # kill 93573 00:20:51.107 22:40:51 -- common/autotest_common.sh@960 -- # wait 93573 00:20:53.012 22:40:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:53.012 22:40:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:53.012 22:40:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:53.012 22:40:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.012 22:40:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:53.012 22:40:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.012 22:40:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.012 22:40:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.012 22:40:53 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:20:53.012 00:20:53.012 real 0m50.906s 00:20:53.012 user 3m12.339s 00:20:53.012 sys 0m9.918s 00:20:53.012 22:40:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:53.012 ************************************ 00:20:53.012 22:40:53 -- common/autotest_common.sh@10 -- # set +x 00:20:53.012 END TEST nvmf_perf 00:20:53.012 ************************************ 00:20:53.012 22:40:53 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:53.012 22:40:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:53.012 22:40:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:53.012 22:40:53 -- common/autotest_common.sh@10 -- # set +x 00:20:53.012 ************************************ 00:20:53.012 START TEST nvmf_fio_host 00:20:53.012 ************************************ 00:20:53.012 22:40:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:53.012 * Looking for test storage... 00:20:53.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:53.012 22:40:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:53.012 22:40:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:53.012 22:40:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:53.012 22:40:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:53.012 22:40:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:53.012 22:40:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:53.012 22:40:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:53.012 22:40:53 -- scripts/common.sh@335 -- # IFS=.-: 00:20:53.012 22:40:53 -- scripts/common.sh@335 -- # read -ra ver1 00:20:53.012 22:40:53 -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.012 22:40:53 -- scripts/common.sh@336 -- # read -ra ver2 00:20:53.012 22:40:53 -- scripts/common.sh@337 -- # local 'op=<' 00:20:53.012 22:40:53 -- scripts/common.sh@339 -- # ver1_l=2 00:20:53.012 22:40:53 -- scripts/common.sh@340 -- # ver2_l=1 00:20:53.012 22:40:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:53.012 22:40:53 -- scripts/common.sh@343 -- # case "$op" in 00:20:53.012 22:40:53 -- scripts/common.sh@344 -- # : 1 00:20:53.012 22:40:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:53.012 22:40:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:53.012 22:40:53 -- scripts/common.sh@364 -- # decimal 1 00:20:53.012 22:40:53 -- scripts/common.sh@352 -- # local d=1 00:20:53.012 22:40:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.012 22:40:53 -- scripts/common.sh@354 -- # echo 1 00:20:53.012 22:40:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:53.012 22:40:53 -- scripts/common.sh@365 -- # decimal 2 00:20:53.012 22:40:53 -- scripts/common.sh@352 -- # local d=2 00:20:53.012 22:40:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.012 22:40:53 -- scripts/common.sh@354 -- # echo 2 00:20:53.012 22:40:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:53.012 22:40:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:53.012 22:40:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:53.012 22:40:53 -- scripts/common.sh@367 -- # return 0 00:20:53.012 22:40:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.012 22:40:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:53.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.012 --rc genhtml_branch_coverage=1 00:20:53.012 --rc genhtml_function_coverage=1 00:20:53.012 --rc genhtml_legend=1 00:20:53.012 --rc geninfo_all_blocks=1 00:20:53.012 --rc geninfo_unexecuted_blocks=1 00:20:53.012 00:20:53.012 ' 00:20:53.012 22:40:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:53.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.012 --rc genhtml_branch_coverage=1 00:20:53.012 --rc genhtml_function_coverage=1 00:20:53.012 --rc genhtml_legend=1 00:20:53.012 --rc geninfo_all_blocks=1 00:20:53.012 --rc geninfo_unexecuted_blocks=1 00:20:53.012 00:20:53.012 ' 00:20:53.012 22:40:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:53.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.012 --rc genhtml_branch_coverage=1 00:20:53.012 --rc genhtml_function_coverage=1 00:20:53.012 --rc genhtml_legend=1 00:20:53.012 --rc geninfo_all_blocks=1 00:20:53.012 --rc geninfo_unexecuted_blocks=1 00:20:53.012 00:20:53.012 ' 00:20:53.012 22:40:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:53.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.012 --rc genhtml_branch_coverage=1 00:20:53.012 --rc genhtml_function_coverage=1 00:20:53.012 --rc genhtml_legend=1 00:20:53.012 --rc geninfo_all_blocks=1 00:20:53.012 --rc geninfo_unexecuted_blocks=1 00:20:53.012 00:20:53.012 ' 00:20:53.012 22:40:53 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.012 22:40:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.012 22:40:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.012 22:40:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.012 22:40:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.012 22:40:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.012 22:40:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.012 22:40:53 -- paths/export.sh@5 -- # export PATH 00:20:53.012 22:40:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.012 22:40:53 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:53.012 22:40:53 -- nvmf/common.sh@7 -- # uname -s 00:20:53.013 22:40:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.013 22:40:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.013 22:40:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.013 22:40:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.013 22:40:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.013 22:40:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.013 22:40:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.013 22:40:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.013 22:40:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.013 22:40:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.272 22:40:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:20:53.272 22:40:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:20:53.272 22:40:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.272 22:40:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.272 22:40:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:53.272 22:40:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.272 22:40:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.272 22:40:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.272 22:40:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.272 22:40:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.272 22:40:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.272 22:40:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.272 22:40:53 -- paths/export.sh@5 -- # export PATH 00:20:53.272 22:40:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.272 22:40:53 -- nvmf/common.sh@46 -- # : 0 00:20:53.272 22:40:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:53.272 22:40:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:53.272 22:40:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:53.272 22:40:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.272 22:40:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.272 22:40:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:53.272 22:40:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:53.272 22:40:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:53.272 22:40:53 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.272 22:40:53 -- host/fio.sh@14 -- # nvmftestinit 00:20:53.272 22:40:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:53.272 22:40:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.272 22:40:53 -- nvmf/common.sh@436 -- # prepare_net_devs 
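A side note on the NVME_HOSTNQN, NVME_HOSTID and NVME_CONNECT variables generated a few lines up: this test drives I/O through the SPDK fio plugin rather than the kernel initiator, so they are not exercised in this log. In host tests that do use nvme-cli they are spliced into a connect call; a purely hypothetical example against the cnode1 subsystem this script creates later would be:

  # hypothetical kernel-initiator connect; not executed anywhere in this run
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"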
00:20:53.272 22:40:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:53.272 22:40:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:53.272 22:40:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.272 22:40:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.272 22:40:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.272 22:40:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:53.272 22:40:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:53.272 22:40:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:53.272 22:40:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:53.272 22:40:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:53.272 22:40:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:53.272 22:40:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.272 22:40:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.272 22:40:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:53.272 22:40:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:53.272 22:40:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:53.272 22:40:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:53.272 22:40:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:53.272 22:40:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.272 22:40:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:53.272 22:40:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:53.272 22:40:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:53.272 22:40:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:53.272 22:40:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:53.272 22:40:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:53.272 Cannot find device "nvmf_tgt_br" 00:20:53.272 22:40:53 -- nvmf/common.sh@154 -- # true 00:20:53.272 22:40:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:53.272 Cannot find device "nvmf_tgt_br2" 00:20:53.272 22:40:53 -- nvmf/common.sh@155 -- # true 00:20:53.272 22:40:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:53.272 22:40:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:53.272 Cannot find device "nvmf_tgt_br" 00:20:53.272 22:40:53 -- nvmf/common.sh@157 -- # true 00:20:53.272 22:40:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:53.272 Cannot find device "nvmf_tgt_br2" 00:20:53.272 22:40:53 -- nvmf/common.sh@158 -- # true 00:20:53.272 22:40:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:53.272 22:40:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:53.272 22:40:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:53.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.272 22:40:53 -- nvmf/common.sh@161 -- # true 00:20:53.272 22:40:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:53.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.272 22:40:53 -- nvmf/common.sh@162 -- # true 00:20:53.272 22:40:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:53.272 22:40:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:53.272 22:40:53 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:53.272 22:40:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:53.272 22:40:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:53.272 22:40:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:53.272 22:40:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:53.272 22:40:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:53.272 22:40:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:53.272 22:40:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:53.272 22:40:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:53.272 22:40:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:53.272 22:40:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:53.272 22:40:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:53.272 22:40:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:53.272 22:40:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:53.272 22:40:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:53.531 22:40:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:53.531 22:40:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:53.531 22:40:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:53.531 22:40:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:53.531 22:40:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:53.531 22:40:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:53.531 22:40:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:53.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:53.531 00:20:53.531 --- 10.0.0.2 ping statistics --- 00:20:53.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.531 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:53.531 22:40:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:53.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:53.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:20:53.531 00:20:53.531 --- 10.0.0.3 ping statistics --- 00:20:53.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.531 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:53.531 22:40:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:53.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:20:53.531 00:20:53.531 --- 10.0.0.1 ping statistics --- 00:20:53.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.531 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:20:53.531 22:40:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.531 22:40:54 -- nvmf/common.sh@421 -- # return 0 00:20:53.531 22:40:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:53.531 22:40:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.531 22:40:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:53.531 22:40:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:53.531 22:40:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.531 22:40:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:53.531 22:40:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:53.531 22:40:54 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:53.531 22:40:54 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:53.531 22:40:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.531 22:40:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.531 22:40:54 -- host/fio.sh@24 -- # nvmfpid=94549 00:20:53.531 22:40:54 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:53.531 22:40:54 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.531 22:40:54 -- host/fio.sh@28 -- # waitforlisten 94549 00:20:53.531 22:40:54 -- common/autotest_common.sh@829 -- # '[' -z 94549 ']' 00:20:53.531 22:40:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.531 22:40:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.531 22:40:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.531 22:40:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.531 22:40:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.531 [2024-11-20 22:40:54.146606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:53.531 [2024-11-20 22:40:54.146692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.789 [2024-11-20 22:40:54.285525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.789 [2024-11-20 22:40:54.370452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:53.789 [2024-11-20 22:40:54.370653] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.789 [2024-11-20 22:40:54.370676] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.789 [2024-11-20 22:40:54.370689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
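As with the earlier nvmf_tgt instance, the app start notices above mean every tracepoint group is enabled (-e 0xFFFF); if a run needs debugging, the snapshot hint they print can be followed literally, for example:

  spdk_trace -s nvmf -i 0          # live snapshot of the enabled nvmf tracepoints
  cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shared-memory trace file for offline analysis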
00:20:53.789 [2024-11-20 22:40:54.370856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.789 [2024-11-20 22:40:54.371468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.789 [2024-11-20 22:40:54.371607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.789 [2024-11-20 22:40:54.371615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.355 22:40:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.355 22:40:55 -- common/autotest_common.sh@862 -- # return 0 00:20:54.355 22:40:55 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:54.614 [2024-11-20 22:40:55.252838] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.614 22:40:55 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:54.614 22:40:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.614 22:40:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.614 22:40:55 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:55.181 Malloc1 00:20:55.181 22:40:55 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:55.181 22:40:55 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:55.440 22:40:56 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.698 [2024-11-20 22:40:56.264467] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.698 22:40:56 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:55.958 22:40:56 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:55.958 22:40:56 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:55.958 22:40:56 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:55.958 22:40:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:55.958 22:40:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:55.958 22:40:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:55.958 22:40:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:55.958 22:40:56 -- common/autotest_common.sh@1330 -- # shift 00:20:55.958 22:40:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:55.958 22:40:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.958 22:40:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:55.958 22:40:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:55.958 22:40:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:55.958 22:40:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:55.958 22:40:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:55.958 22:40:56 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.958 22:40:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:55.958 22:40:56 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:55.958 22:40:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:55.958 22:40:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:55.958 22:40:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:55.958 22:40:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:55.958 22:40:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:55.958 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:55.958 fio-3.35 00:20:55.958 Starting 1 thread 00:20:58.491 00:20:58.491 test: (groupid=0, jobs=1): err= 0: pid=94680: Wed Nov 20 22:40:58 2024 00:20:58.491 read: IOPS=11.2k, BW=43.6MiB/s (45.7MB/s)(87.4MiB/2005msec) 00:20:58.491 slat (nsec): min=1584, max=348740, avg=2269.78, stdev=3289.81 00:20:58.491 clat (usec): min=3060, max=13118, avg=6108.53, stdev=570.12 00:20:58.491 lat (usec): min=3103, max=13121, avg=6110.80, stdev=570.07 00:20:58.491 clat percentiles (usec): 00:20:58.491 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:20:58.491 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:20:58.491 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 6849], 00:20:58.491 | 99.00th=[ 7373], 99.50th=[ 8979], 99.90th=[11731], 99.95th=[12911], 00:20:58.491 | 99.99th=[13042] 00:20:58.491 bw ( KiB/s): min=43296, max=45336, per=99.95%, avg=44590.00, stdev=905.74, samples=4 00:20:58.491 iops : min=10824, max=11334, avg=11147.50, stdev=226.44, samples=4 00:20:58.491 write: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(87.0MiB/2005msec); 0 zone resets 00:20:58.491 slat (nsec): min=1624, max=233514, avg=2342.87, stdev=2149.05 00:20:58.491 clat (usec): min=2343, max=11731, avg=5334.19, stdev=486.97 00:20:58.491 lat (usec): min=2356, max=11734, avg=5336.54, stdev=487.00 00:20:58.491 clat percentiles (usec): 00:20:58.491 | 1.00th=[ 4490], 5.00th=[ 4752], 10.00th=[ 4817], 20.00th=[ 5014], 00:20:58.491 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5407], 00:20:58.491 | 70.00th=[ 5538], 80.00th=[ 5604], 90.00th=[ 5800], 95.00th=[ 5932], 00:20:58.491 | 99.00th=[ 6325], 99.50th=[ 7767], 99.90th=[10814], 99.95th=[11207], 00:20:58.491 | 99.99th=[11600] 00:20:58.491 bw ( KiB/s): min=43688, max=45232, per=99.98%, avg=44444.00, stdev=632.03, samples=4 00:20:58.491 iops : min=10922, max=11308, avg=11111.00, stdev=158.01, samples=4 00:20:58.491 lat (msec) : 4=0.11%, 10=99.64%, 20=0.26% 00:20:58.491 cpu : usr=63.32%, sys=25.90%, ctx=331, majf=0, minf=5 00:20:58.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:58.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.491 issued rwts: total=22362,22282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.491 00:20:58.491 Run status group 0 (all jobs): 00:20:58.491 READ: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=87.4MiB (91.6MB), 
run=2005-2005msec 00:20:58.491 WRITE: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=87.0MiB (91.3MB), run=2005-2005msec 00:20:58.491 22:40:58 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:58.491 22:40:58 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:58.491 22:40:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:58.491 22:40:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:58.491 22:40:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:58.491 22:40:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:58.491 22:40:58 -- common/autotest_common.sh@1330 -- # shift 00:20:58.491 22:40:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:58.491 22:40:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.491 22:40:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:58.491 22:40:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:58.491 22:40:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:58.491 22:40:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:58.491 22:40:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:58.491 22:40:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.491 22:40:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:58.491 22:40:59 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:58.491 22:40:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:58.491 22:40:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:58.491 22:40:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:58.491 22:40:59 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:58.491 22:40:59 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:58.491 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:58.491 fio-3.35 00:20:58.491 Starting 1 thread 00:21:01.022 00:21:01.022 test: (groupid=0, jobs=1): err= 0: pid=94723: Wed Nov 20 22:41:01 2024 00:21:01.022 read: IOPS=9248, BW=145MiB/s (152MB/s)(290MiB/2005msec) 00:21:01.022 slat (usec): min=2, max=107, avg= 3.23, stdev= 2.19 00:21:01.022 clat (usec): min=2399, max=18601, avg=8340.77, stdev=2196.75 00:21:01.022 lat (usec): min=2402, max=18605, avg=8344.00, stdev=2196.97 00:21:01.022 clat percentiles (usec): 00:21:01.022 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6456], 00:21:01.022 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8225], 60.00th=[ 8848], 00:21:01.022 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[12256], 00:21:01.022 | 99.00th=[14877], 99.50th=[15533], 99.90th=[17433], 99.95th=[17957], 00:21:01.022 | 99.99th=[18482] 00:21:01.022 bw ( KiB/s): min=66944, max=77248, per=49.15%, avg=72737.50, stdev=4296.15, samples=4 00:21:01.022 iops : 
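Both fio jobs in this test go through the fio_nvme wrapper, which is little more than preloading the SPDK ioengine into an external fio binary and encoding the TCP target in the filename string. Stripped of the sanitizer bookkeeping traced above, the first invocation is roughly:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096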
min= 4184, max= 4828, avg=4545.75, stdev=268.46, samples=4 00:21:01.022 write: IOPS=5427, BW=84.8MiB/s (88.9MB/s)(149MiB/1751msec); 0 zone resets 00:21:01.022 slat (usec): min=27, max=351, avg=33.14, stdev= 9.88 00:21:01.022 clat (usec): min=2988, max=19580, avg=9891.34, stdev=1947.91 00:21:01.022 lat (usec): min=3016, max=19609, avg=9924.48, stdev=1949.84 00:21:01.022 clat percentiles (usec): 00:21:01.022 | 1.00th=[ 6194], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8291], 00:21:01.022 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10028], 00:21:01.022 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12518], 95.00th=[13435], 00:21:01.022 | 99.00th=[15008], 99.50th=[16057], 99.90th=[19006], 99.95th=[19268], 00:21:01.022 | 99.99th=[19530] 00:21:01.023 bw ( KiB/s): min=70400, max=81184, per=87.22%, avg=75749.75, stdev=4435.53, samples=4 00:21:01.023 iops : min= 4400, max= 5074, avg=4734.00, stdev=277.21, samples=4 00:21:01.023 lat (msec) : 4=0.52%, 10=73.45%, 20=26.04% 00:21:01.023 cpu : usr=70.56%, sys=19.16%, ctx=5, majf=0, minf=1 00:21:01.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:01.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:01.023 issued rwts: total=18544,9504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:01.023 00:21:01.023 Run status group 0 (all jobs): 00:21:01.023 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=290MiB (304MB), run=2005-2005msec 00:21:01.023 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=149MiB (156MB), run=1751-1751msec 00:21:01.023 22:41:01 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.023 22:41:01 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:01.023 22:41:01 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:01.023 22:41:01 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:01.023 22:41:01 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:01.023 22:41:01 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:01.023 22:41:01 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:01.023 22:41:01 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:01.023 22:41:01 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:01.023 22:41:01 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:01.023 22:41:01 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:01.023 22:41:01 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:01.590 Nvme0n1 00:21:01.590 22:41:02 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:01.849 22:41:02 -- host/fio.sh@53 -- # ls_guid=2ea34397-3584-4cba-88be-2bac3a195d74 00:21:01.849 22:41:02 -- host/fio.sh@54 -- # get_lvs_free_mb 2ea34397-3584-4cba-88be-2bac3a195d74 00:21:01.849 22:41:02 -- common/autotest_common.sh@1353 -- # local lvs_uuid=2ea34397-3584-4cba-88be-2bac3a195d74 00:21:01.849 22:41:02 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:01.849 22:41:02 -- common/autotest_common.sh@1355 -- # local fc 00:21:01.849 22:41:02 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:01.849 22:41:02 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:01.849 22:41:02 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:01.849 { 00:21:01.849 "base_bdev": "Nvme0n1", 00:21:01.849 "block_size": 4096, 00:21:01.849 "cluster_size": 1073741824, 00:21:01.849 "free_clusters": 4, 00:21:01.849 "name": "lvs_0", 00:21:01.849 "total_data_clusters": 4, 00:21:01.849 "uuid": "2ea34397-3584-4cba-88be-2bac3a195d74" 00:21:01.849 } 00:21:01.849 ]' 00:21:01.849 22:41:02 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="2ea34397-3584-4cba-88be-2bac3a195d74") .free_clusters' 00:21:02.107 22:41:02 -- common/autotest_common.sh@1358 -- # fc=4 00:21:02.107 22:41:02 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="2ea34397-3584-4cba-88be-2bac3a195d74") .cluster_size' 00:21:02.107 22:41:02 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:02.107 22:41:02 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:02.107 4096 00:21:02.107 22:41:02 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:02.107 22:41:02 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:02.366 5fb99db2-ba58-41f5-bddc-0caec3983257 00:21:02.366 22:41:02 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:02.366 22:41:03 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:02.625 22:41:03 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:02.884 22:41:03 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.884 22:41:03 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.884 22:41:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:02.884 22:41:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:02.884 22:41:03 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:02.884 22:41:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.884 22:41:03 -- common/autotest_common.sh@1330 -- # shift 00:21:02.884 22:41:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:02.884 22:41:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.884 22:41:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.884 22:41:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:02.884 22:41:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:02.884 22:41:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:02.884 22:41:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:02.884 22:41:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.884 22:41:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.884 22:41:03 -- 
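For reference, the provisioning that host/fio.sh performs between the two fio runs above can be replayed by hand. This is a condensed sketch of the RPC calls visible in the trace; the rpc.py path, the 1 GiB cluster size and the 10.0.0.2:4420 listener come from this run, while the shell variables are only shorthand:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the local NVMe device as Nvme0n1 and build a logical-volume store on it
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    $rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0

    # get_lvs_free_mb: free_clusters * cluster_size, reported in MiB
    fc=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .free_clusters')
    cs=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .cluster_size')
    free_mb=$((fc * cs / 1024 / 1024))   # 4 clusters x 1 GiB = 4096 MiB in this run

    # Carve a volume of that size and export it over NVMe/TCP
    $rpc bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420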
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:02.884 22:41:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:02.884 22:41:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:02.884 22:41:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:02.884 22:41:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:02.884 22:41:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:03.143 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:03.143 fio-3.35 00:21:03.143 Starting 1 thread 00:21:05.679 00:21:05.679 test: (groupid=0, jobs=1): err= 0: pid=94873: Wed Nov 20 22:41:05 2024 00:21:05.679 read: IOPS=8051, BW=31.5MiB/s (33.0MB/s)(63.1MiB/2007msec) 00:21:05.679 slat (nsec): min=1710, max=326181, avg=2680.86, stdev=3941.85 00:21:05.679 clat (usec): min=3432, max=13852, avg=8543.22, stdev=825.14 00:21:05.679 lat (usec): min=3441, max=13854, avg=8545.90, stdev=824.99 00:21:05.679 clat percentiles (usec): 00:21:05.679 | 1.00th=[ 6783], 5.00th=[ 7242], 10.00th=[ 7504], 20.00th=[ 7832], 00:21:05.679 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:21:05.679 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9896], 00:21:05.679 | 99.00th=[10552], 99.50th=[10814], 99.90th=[12387], 99.95th=[13435], 00:21:05.679 | 99.99th=[13829] 00:21:05.679 bw ( KiB/s): min=31152, max=32672, per=99.95%, avg=32190.00, stdev=715.31, samples=4 00:21:05.679 iops : min= 7788, max= 8168, avg=8047.50, stdev=178.83, samples=4 00:21:05.679 write: IOPS=8031, BW=31.4MiB/s (32.9MB/s)(63.0MiB/2007msec); 0 zone resets 00:21:05.679 slat (nsec): min=1803, max=237295, avg=2763.47, stdev=3014.25 00:21:05.679 clat (usec): min=2407, max=12427, avg=7304.46, stdev=700.87 00:21:05.679 lat (usec): min=2420, max=12429, avg=7307.23, stdev=700.84 00:21:05.679 clat percentiles (usec): 00:21:05.679 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 6718], 00:21:05.679 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7504], 00:21:05.679 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8160], 95.00th=[ 8455], 00:21:05.679 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11600], 00:21:05.679 | 99.99th=[12387] 00:21:05.679 bw ( KiB/s): min=32008, max=32192, per=99.96%, avg=32114.00, stdev=86.26, samples=4 00:21:05.679 iops : min= 8002, max= 8048, avg=8028.50, stdev=21.56, samples=4 00:21:05.679 lat (msec) : 4=0.07%, 10=97.84%, 20=2.09% 00:21:05.679 cpu : usr=65.80%, sys=25.17%, ctx=8, majf=0, minf=5 00:21:05.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:05.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:05.679 issued rwts: total=16160,16119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:05.679 00:21:05.679 Run status group 0 (all jobs): 00:21:05.679 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.1MiB (66.2MB), run=2007-2007msec 00:21:05.679 WRITE: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=63.0MiB (66.0MB), run=2007-2007msec 00:21:05.679 22:41:05 -- host/fio.sh@61 -- # 
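The fio jobs in this test are not run against the kernel NVMe/TCP initiator; fio_nvme preloads SPDK's fio plugin so that fio's ioengine=spdk talks to the target directly. A minimal reproduction of the invocation shown in the trace, assuming fio is installed under /usr/src/fio as on this CI image:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # The --filename string encodes transport type, address family, address,
    # service ID and namespace of the subsystem created above.
    LD_PRELOAD=$plugin /usr/src/fio/fio "$config" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The ldd | grep libasan / libclang_rt.asan steps in the trace only check whether the plugin was built against a sanitizer runtime; on this run neither library is linked in, so LD_PRELOAD ends up carrying the plugin alone.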
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:05.679 22:41:06 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:05.938 22:41:06 -- host/fio.sh@64 -- # ls_nested_guid=3d51b03c-c2e3-4194-a7e4-d80cbfa4760f 00:21:05.938 22:41:06 -- host/fio.sh@65 -- # get_lvs_free_mb 3d51b03c-c2e3-4194-a7e4-d80cbfa4760f 00:21:05.938 22:41:06 -- common/autotest_common.sh@1353 -- # local lvs_uuid=3d51b03c-c2e3-4194-a7e4-d80cbfa4760f 00:21:05.938 22:41:06 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:05.938 22:41:06 -- common/autotest_common.sh@1355 -- # local fc 00:21:05.938 22:41:06 -- common/autotest_common.sh@1356 -- # local cs 00:21:05.938 22:41:06 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:06.198 22:41:06 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:06.198 { 00:21:06.198 "base_bdev": "Nvme0n1", 00:21:06.198 "block_size": 4096, 00:21:06.198 "cluster_size": 1073741824, 00:21:06.198 "free_clusters": 0, 00:21:06.198 "name": "lvs_0", 00:21:06.198 "total_data_clusters": 4, 00:21:06.198 "uuid": "2ea34397-3584-4cba-88be-2bac3a195d74" 00:21:06.198 }, 00:21:06.198 { 00:21:06.198 "base_bdev": "5fb99db2-ba58-41f5-bddc-0caec3983257", 00:21:06.198 "block_size": 4096, 00:21:06.198 "cluster_size": 4194304, 00:21:06.198 "free_clusters": 1022, 00:21:06.198 "name": "lvs_n_0", 00:21:06.198 "total_data_clusters": 1022, 00:21:06.198 "uuid": "3d51b03c-c2e3-4194-a7e4-d80cbfa4760f" 00:21:06.198 } 00:21:06.198 ]' 00:21:06.198 22:41:06 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="3d51b03c-c2e3-4194-a7e4-d80cbfa4760f") .free_clusters' 00:21:06.198 22:41:06 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:06.198 22:41:06 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="3d51b03c-c2e3-4194-a7e4-d80cbfa4760f") .cluster_size' 00:21:06.198 22:41:06 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:06.198 22:41:06 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:06.198 4088 00:21:06.198 22:41:06 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:06.198 22:41:06 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:06.457 5ef5abca-5ca0-437b-ac30-bcb255c11ac4 00:21:06.457 22:41:07 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:06.716 22:41:07 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:06.976 22:41:07 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:07.235 22:41:07 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:07.235 22:41:07 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:07.235 22:41:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:07.235 22:41:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.235 
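A note on the size used just above: lvs_n_0 is a second lvol store created on top of lvs_0/lbd_0, so the third fio target is slightly smaller than the 4096 MiB volume that backs it. With the 4 MiB cluster size reported by bdev_lvol_get_lvstores, 1022 free clusters remain once the nested store's metadata is laid down:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Nested store on top of the existing volume; --clear-method none skips wiping it
    $rpc bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0

    # 1022 free clusters x 4 MiB cluster size = 4088 MiB usable
    echo $((1022 * 4194304 / 1024 / 1024))   # prints 4088

    $rpc bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088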
22:41:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:07.235 22:41:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:07.235 22:41:07 -- common/autotest_common.sh@1330 -- # shift 00:21:07.235 22:41:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:07.235 22:41:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.235 22:41:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:07.235 22:41:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:07.235 22:41:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:07.235 22:41:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:07.235 22:41:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:07.235 22:41:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.235 22:41:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:07.235 22:41:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:07.235 22:41:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:07.235 22:41:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:07.235 22:41:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:07.235 22:41:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:07.235 22:41:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:07.235 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:07.235 fio-3.35 00:21:07.235 Starting 1 thread 00:21:09.765 00:21:09.765 test: (groupid=0, jobs=1): err= 0: pid=94996: Wed Nov 20 22:41:10 2024 00:21:09.765 read: IOPS=5829, BW=22.8MiB/s (23.9MB/s)(45.7MiB/2009msec) 00:21:09.765 slat (nsec): min=1760, max=343602, avg=2923.41, stdev=4906.15 00:21:09.765 clat (usec): min=4651, max=19598, avg=11715.60, stdev=1117.93 00:21:09.765 lat (usec): min=4661, max=19601, avg=11718.52, stdev=1117.71 00:21:09.765 clat percentiles (usec): 00:21:09.765 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:21:09.765 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:21:09.765 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13173], 95.00th=[13566], 00:21:09.765 | 99.00th=[14353], 99.50th=[14877], 99.90th=[17433], 99.95th=[19268], 00:21:09.765 | 99.99th=[19530] 00:21:09.765 bw ( KiB/s): min=22448, max=24048, per=99.88%, avg=23290.00, stdev=711.40, samples=4 00:21:09.765 iops : min= 5612, max= 6012, avg=5822.50, stdev=177.85, samples=4 00:21:09.765 write: IOPS=5813, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2009msec); 0 zone resets 00:21:09.765 slat (nsec): min=1832, max=284322, avg=3011.01, stdev=3865.19 00:21:09.765 clat (usec): min=2605, max=19145, avg=10194.63, stdev=978.69 00:21:09.765 lat (usec): min=2619, max=19147, avg=10197.64, stdev=978.58 00:21:09.765 clat percentiles (usec): 00:21:09.765 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9503], 00:21:09.765 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:21:09.765 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:21:09.765 | 99.00th=[12518], 99.50th=[12911], 99.90th=[17433], 99.95th=[17695], 00:21:09.765 | 99.99th=[19006] 
00:21:09.765 bw ( KiB/s): min=22856, max=23488, per=99.93%, avg=23236.00, stdev=268.88, samples=4 00:21:09.765 iops : min= 5714, max= 5872, avg=5809.00, stdev=67.22, samples=4 00:21:09.765 lat (msec) : 4=0.03%, 10=23.39%, 20=76.58% 00:21:09.765 cpu : usr=72.86%, sys=20.22%, ctx=35, majf=0, minf=5 00:21:09.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:09.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.765 issued rwts: total=11711,11679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.765 00:21:09.765 Run status group 0 (all jobs): 00:21:09.765 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.7MiB (48.0MB), run=2009-2009msec 00:21:09.765 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2009-2009msec 00:21:09.765 22:41:10 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:10.024 22:41:10 -- host/fio.sh@74 -- # sync 00:21:10.024 22:41:10 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:10.281 22:41:10 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:10.540 22:41:11 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:10.798 22:41:11 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:11.057 22:41:11 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:11.624 22:41:12 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:11.624 22:41:12 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:11.624 22:41:12 -- host/fio.sh@86 -- # nvmftestfini 00:21:11.624 22:41:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:11.624 22:41:12 -- nvmf/common.sh@116 -- # sync 00:21:11.624 22:41:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:11.624 22:41:12 -- nvmf/common.sh@119 -- # set +e 00:21:11.624 22:41:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:11.624 22:41:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:11.624 rmmod nvme_tcp 00:21:11.624 rmmod nvme_fabrics 00:21:11.624 rmmod nvme_keyring 00:21:11.624 22:41:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:11.624 22:41:12 -- nvmf/common.sh@123 -- # set -e 00:21:11.624 22:41:12 -- nvmf/common.sh@124 -- # return 0 00:21:11.624 22:41:12 -- nvmf/common.sh@477 -- # '[' -n 94549 ']' 00:21:11.624 22:41:12 -- nvmf/common.sh@478 -- # killprocess 94549 00:21:11.624 22:41:12 -- common/autotest_common.sh@936 -- # '[' -z 94549 ']' 00:21:11.624 22:41:12 -- common/autotest_common.sh@940 -- # kill -0 94549 00:21:11.624 22:41:12 -- common/autotest_common.sh@941 -- # uname 00:21:11.624 22:41:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:11.624 22:41:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94549 00:21:11.624 killing process with pid 94549 00:21:11.624 22:41:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:11.624 22:41:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:11.624 22:41:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94549' 00:21:11.624 22:41:12 -- 
common/autotest_common.sh@955 -- # kill 94549 00:21:11.624 22:41:12 -- common/autotest_common.sh@960 -- # wait 94549 00:21:11.882 22:41:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:11.883 22:41:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:11.883 22:41:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:11.883 22:41:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.883 22:41:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:11.883 22:41:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.883 22:41:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.883 22:41:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.883 22:41:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:11.883 00:21:11.883 real 0m19.024s 00:21:11.883 user 1m22.938s 00:21:11.883 sys 0m4.482s 00:21:11.883 22:41:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:11.883 ************************************ 00:21:11.883 END TEST nvmf_fio_host 00:21:11.883 22:41:12 -- common/autotest_common.sh@10 -- # set +x 00:21:11.883 ************************************ 00:21:11.883 22:41:12 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:11.883 22:41:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:11.883 22:41:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:11.883 22:41:12 -- common/autotest_common.sh@10 -- # set +x 00:21:12.142 ************************************ 00:21:12.142 START TEST nvmf_failover 00:21:12.142 ************************************ 00:21:12.142 22:41:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:12.142 * Looking for test storage... 00:21:12.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:12.142 22:41:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:12.142 22:41:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:12.142 22:41:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:12.142 22:41:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:12.142 22:41:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:12.142 22:41:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:12.142 22:41:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:12.142 22:41:12 -- scripts/common.sh@335 -- # IFS=.-: 00:21:12.142 22:41:12 -- scripts/common.sh@335 -- # read -ra ver1 00:21:12.142 22:41:12 -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.142 22:41:12 -- scripts/common.sh@336 -- # read -ra ver2 00:21:12.142 22:41:12 -- scripts/common.sh@337 -- # local 'op=<' 00:21:12.142 22:41:12 -- scripts/common.sh@339 -- # ver1_l=2 00:21:12.142 22:41:12 -- scripts/common.sh@340 -- # ver2_l=1 00:21:12.142 22:41:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:12.142 22:41:12 -- scripts/common.sh@343 -- # case "$op" in 00:21:12.142 22:41:12 -- scripts/common.sh@344 -- # : 1 00:21:12.142 22:41:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:12.142 22:41:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
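Before the failover test gets going, it is worth seeing in one place the teardown that closed out nvmf_fio_host above; it is simply the setup unwound in reverse order (the PID is specific to this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Nested objects first, then the base store, then release the PCIe controller
    $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0
    $rpc bdev_lvol_delete_lvstore -l lvs_n_0
    $rpc bdev_lvol_delete lvs_0/lbd_0
    $rpc bdev_lvol_delete_lvstore -l lvs_0
    $rpc bdev_nvme_detach_controller Nvme0

    # nvmftestfini: unload the kernel initiator modules and stop the target (pid 94549 here)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 94549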
ver1_l : ver2_l) )) 00:21:12.142 22:41:12 -- scripts/common.sh@364 -- # decimal 1 00:21:12.142 22:41:12 -- scripts/common.sh@352 -- # local d=1 00:21:12.142 22:41:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.142 22:41:12 -- scripts/common.sh@354 -- # echo 1 00:21:12.142 22:41:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:12.142 22:41:12 -- scripts/common.sh@365 -- # decimal 2 00:21:12.142 22:41:12 -- scripts/common.sh@352 -- # local d=2 00:21:12.142 22:41:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.142 22:41:12 -- scripts/common.sh@354 -- # echo 2 00:21:12.142 22:41:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:12.142 22:41:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:12.142 22:41:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:12.142 22:41:12 -- scripts/common.sh@367 -- # return 0 00:21:12.142 22:41:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.142 22:41:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:12.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.142 --rc genhtml_branch_coverage=1 00:21:12.142 --rc genhtml_function_coverage=1 00:21:12.142 --rc genhtml_legend=1 00:21:12.142 --rc geninfo_all_blocks=1 00:21:12.142 --rc geninfo_unexecuted_blocks=1 00:21:12.142 00:21:12.142 ' 00:21:12.142 22:41:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:12.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.142 --rc genhtml_branch_coverage=1 00:21:12.142 --rc genhtml_function_coverage=1 00:21:12.142 --rc genhtml_legend=1 00:21:12.142 --rc geninfo_all_blocks=1 00:21:12.142 --rc geninfo_unexecuted_blocks=1 00:21:12.142 00:21:12.142 ' 00:21:12.142 22:41:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:12.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.142 --rc genhtml_branch_coverage=1 00:21:12.142 --rc genhtml_function_coverage=1 00:21:12.142 --rc genhtml_legend=1 00:21:12.142 --rc geninfo_all_blocks=1 00:21:12.142 --rc geninfo_unexecuted_blocks=1 00:21:12.142 00:21:12.142 ' 00:21:12.142 22:41:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:12.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.142 --rc genhtml_branch_coverage=1 00:21:12.142 --rc genhtml_function_coverage=1 00:21:12.142 --rc genhtml_legend=1 00:21:12.142 --rc geninfo_all_blocks=1 00:21:12.142 --rc geninfo_unexecuted_blocks=1 00:21:12.142 00:21:12.142 ' 00:21:12.142 22:41:12 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:12.142 22:41:12 -- nvmf/common.sh@7 -- # uname -s 00:21:12.142 22:41:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.142 22:41:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.142 22:41:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.142 22:41:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.142 22:41:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.142 22:41:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.142 22:41:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.142 22:41:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.142 22:41:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.142 22:41:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.142 22:41:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:21:12.142 
22:41:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:21:12.142 22:41:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.142 22:41:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.142 22:41:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:12.142 22:41:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:12.143 22:41:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.143 22:41:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.143 22:41:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.143 22:41:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.143 22:41:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.143 22:41:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.143 22:41:12 -- paths/export.sh@5 -- # export PATH 00:21:12.143 22:41:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.143 22:41:12 -- nvmf/common.sh@46 -- # : 0 00:21:12.143 22:41:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:12.143 22:41:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:12.143 22:41:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:12.143 22:41:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.143 22:41:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.143 22:41:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:21:12.143 22:41:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:12.143 22:41:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:12.143 22:41:12 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:12.143 22:41:12 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:12.143 22:41:12 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:12.143 22:41:12 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:12.143 22:41:12 -- host/failover.sh@18 -- # nvmftestinit 00:21:12.143 22:41:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:12.143 22:41:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.143 22:41:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:12.143 22:41:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:12.143 22:41:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:12.143 22:41:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.143 22:41:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.143 22:41:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.143 22:41:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:12.143 22:41:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:12.143 22:41:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:12.143 22:41:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:12.143 22:41:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:12.143 22:41:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:12.143 22:41:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.143 22:41:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.143 22:41:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:12.143 22:41:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:12.143 22:41:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:12.143 22:41:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:12.143 22:41:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:12.143 22:41:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.143 22:41:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:12.143 22:41:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:12.143 22:41:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:12.143 22:41:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:12.143 22:41:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:12.143 22:41:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:12.143 Cannot find device "nvmf_tgt_br" 00:21:12.143 22:41:12 -- nvmf/common.sh@154 -- # true 00:21:12.143 22:41:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:12.402 Cannot find device "nvmf_tgt_br2" 00:21:12.402 22:41:12 -- nvmf/common.sh@155 -- # true 00:21:12.402 22:41:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:12.402 22:41:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:12.402 Cannot find device "nvmf_tgt_br" 00:21:12.402 22:41:12 -- nvmf/common.sh@157 -- # true 00:21:12.402 22:41:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:12.402 Cannot find device "nvmf_tgt_br2" 00:21:12.402 22:41:12 -- nvmf/common.sh@158 -- # true 00:21:12.402 22:41:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:12.402 22:41:12 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:12.402 22:41:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:12.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.402 22:41:12 -- nvmf/common.sh@161 -- # true 00:21:12.402 22:41:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.402 22:41:12 -- nvmf/common.sh@162 -- # true 00:21:12.402 22:41:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:12.402 22:41:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:12.402 22:41:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:12.402 22:41:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:12.402 22:41:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:12.402 22:41:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:12.402 22:41:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:12.402 22:41:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:12.402 22:41:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:12.402 22:41:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:12.402 22:41:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:12.402 22:41:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:12.402 22:41:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:12.402 22:41:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:12.402 22:41:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:12.402 22:41:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:12.402 22:41:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:12.402 22:41:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:12.402 22:41:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:12.402 22:41:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:12.402 22:41:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:12.661 22:41:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:12.661 22:41:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:12.661 22:41:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:12.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:21:12.661 00:21:12.661 --- 10.0.0.2 ping statistics --- 00:21:12.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.661 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:12.661 22:41:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:12.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:12.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:21:12.661 00:21:12.661 --- 10.0.0.3 ping statistics --- 00:21:12.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.661 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:12.661 22:41:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:12.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:12.661 00:21:12.661 --- 10.0.0.1 ping statistics --- 00:21:12.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.661 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:12.661 22:41:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.661 22:41:13 -- nvmf/common.sh@421 -- # return 0 00:21:12.661 22:41:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:12.661 22:41:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.661 22:41:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:12.661 22:41:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:12.661 22:41:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.661 22:41:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:12.661 22:41:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:12.661 22:41:13 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:12.661 22:41:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:12.662 22:41:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.662 22:41:13 -- common/autotest_common.sh@10 -- # set +x 00:21:12.662 22:41:13 -- nvmf/common.sh@469 -- # nvmfpid=95275 00:21:12.662 22:41:13 -- nvmf/common.sh@470 -- # waitforlisten 95275 00:21:12.662 22:41:13 -- common/autotest_common.sh@829 -- # '[' -z 95275 ']' 00:21:12.662 22:41:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.662 22:41:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:12.662 22:41:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.662 22:41:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.662 22:41:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.662 22:41:13 -- common/autotest_common.sh@10 -- # set +x 00:21:12.662 [2024-11-20 22:41:13.248192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:12.662 [2024-11-20 22:41:13.248294] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.662 [2024-11-20 22:41:13.386695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:12.920 [2024-11-20 22:41:13.446130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:12.920 [2024-11-20 22:41:13.446268] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.920 [2024-11-20 22:41:13.446331] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
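The "Cannot find device" and "Cannot open network namespace" lines above are just nvmf_veth_init clearing out leftovers from a previous run before it rebuilds the topology: the target process lives inside the nvmf_tgt_ns_spdk namespace and is reached from the host over bridged veth pairs. Trimmed down to the commands in the trace (link, bridge and namespace names are the harness's own):

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator side, two for the target side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # 10.0.0.1 is the initiator; 10.0.0.2 and 10.0.0.3 are target addresses in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bridge the host-side peers together and let NVMe/TCP traffic through on 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Every link is also brought up (omitted here), after which the three pings above confirm that 10.0.0.2, 10.0.0.3 and 10.0.0.1 are reachable before the target is started inside the namespace.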
00:21:12.920 [2024-11-20 22:41:13.446341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.920 [2024-11-20 22:41:13.446453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.920 [2024-11-20 22:41:13.446829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.920 [2024-11-20 22:41:13.446839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.488 22:41:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.488 22:41:14 -- common/autotest_common.sh@862 -- # return 0 00:21:13.488 22:41:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:13.488 22:41:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.488 22:41:14 -- common/autotest_common.sh@10 -- # set +x 00:21:13.488 22:41:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.488 22:41:14 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:13.749 [2024-11-20 22:41:14.478997] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.027 22:41:14 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:14.295 Malloc0 00:21:14.295 22:41:14 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:14.295 22:41:15 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:14.553 22:41:15 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.812 [2024-11-20 22:41:15.403718] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.812 22:41:15 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:15.071 [2024-11-20 22:41:15.603954] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:15.071 22:41:15 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:15.071 [2024-11-20 22:41:15.800444] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:15.330 22:41:15 -- host/failover.sh@31 -- # bdevperf_pid=95381 00:21:15.330 22:41:15 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:15.330 22:41:15 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.330 22:41:15 -- host/failover.sh@34 -- # waitforlisten 95381 /var/tmp/bdevperf.sock 00:21:15.330 22:41:15 -- common/autotest_common.sh@829 -- # '[' -z 95381 ']' 00:21:15.330 22:41:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.330 22:41:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.330 22:41:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
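The failover target exposes a single subsystem on three TCP ports so that individual paths can be killed while I/O keeps flowing. Condensed from the trace, the target-side configuration and the RPC-driven bdevperf initiator look roughly like this (the -q 128 -o 4096 -w verify -t 15 parameters match the invocation above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

    # Initiator: bdevperf waits for RPC commands (-z) on its own socket,
    # and is given two paths to the same subsystem before the test starts
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1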
00:21:15.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.330 22:41:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.330 22:41:15 -- common/autotest_common.sh@10 -- # set +x 00:21:16.311 22:41:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.311 22:41:16 -- common/autotest_common.sh@862 -- # return 0 00:21:16.311 22:41:16 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:16.570 NVMe0n1 00:21:16.570 22:41:17 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:16.829 00:21:16.829 22:41:17 -- host/failover.sh@39 -- # run_test_pid=95434 00:21:16.829 22:41:17 -- host/failover.sh@41 -- # sleep 1 00:21:16.829 22:41:17 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:17.764 22:41:18 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.023 [2024-11-20 22:41:18.715786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715872] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715895] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715991] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.715997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716069] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.023 [2024-11-20 22:41:18.716115] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 [2024-11-20 22:41:18.716242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cab0 is same with the state(5) to be set 00:21:18.024 22:41:18 -- host/failover.sh@45 -- # sleep 3 00:21:21.311 22:41:21 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:21.570 00:21:21.570 22:41:22 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
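This is the heart of the failover exercise: while bdevperf keeps 128 outstanding verify I/Os against NVMe0n1, listeners are removed and re-added so the multipath NVMe bdev has to move between ports 4420, 4421 and 4422. The long run of "The recv state of tqpair ... is same with the state(5) to be set" messages is the target tracing qpair state changes as those connections are torn down. Using the same $rpc / $brpc shorthand as above, the sequence reduces to:

    # Kick off the workload defined by the attached controller (-t 15 seconds)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Drop the active path, wait, then offer a third path and retire the second
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Later the script brings 4420 back and removes 4422, cycling traffic across all three ports
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422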
00:21:21.570 [2024-11-20 22:41:22.272444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143d920 is same with the state(5) to be set
00:21:21.570 (last message repeated 35 more times for tqpair=0x143d920, through 22:41:22.272813)
00:21:21.570 22:41:22 -- host/failover.sh@50 -- # sleep 3
00:21:24.857 22:41:25 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:24.857 [2024-11-20 22:41:25.526179] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:24.857 22:41:25 -- host/failover.sh@55 -- # sleep 1
00:21:26.235 22:41:26 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:26.235 [2024-11-20 22:41:26.789533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143f040 is same with the state(5) to be set
00:21:26.235 (last message repeated 43 more times for tqpair=0x143f040, through 22:41:26.793569)
00:21:26.235 22:41:26 -- host/failover.sh@59 -- # wait 95434
00:21:32.803 0
00:21:32.803 22:41:32 -- host/failover.sh@61 -- # killprocess 95381
00:21:32.803 22:41:32 -- common/autotest_common.sh@936 -- # '[' -z 95381 ']'
00:21:32.803 22:41:32 -- common/autotest_common.sh@940 -- # kill -0 95381
00:21:32.803 22:41:32 -- common/autotest_common.sh@941 -- # uname
00:21:32.803 22:41:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:32.803 22:41:32 --
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95381 00:21:32.803 22:41:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:32.803 22:41:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:32.803 killing process with pid 95381 00:21:32.803 22:41:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95381' 00:21:32.803 22:41:32 -- common/autotest_common.sh@955 -- # kill 95381 00:21:32.803 22:41:32 -- common/autotest_common.sh@960 -- # wait 95381 00:21:32.803 22:41:32 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:32.803 [2024-11-20 22:41:15.870919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:32.803 [2024-11-20 22:41:15.871017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95381 ] 00:21:32.803 [2024-11-20 22:41:16.009062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.803 [2024-11-20 22:41:16.085298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.803 Running I/O for 15 seconds... 00:21:32.803 [2024-11-20 22:41:18.716710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.716986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.716998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.717022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.717081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.717106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.717130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.717154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.717180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 
22:41:18.717210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.803 [2024-11-20 22:41:18.717251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.803 [2024-11-20 22:41:18.717265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.717982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.717996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.804 [2024-11-20 22:41:18.718088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.804 [2024-11-20 22:41:18.718116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:32.804 [2024-11-20 22:41:18.718185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.804 [2024-11-20 22:41:18.718264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718508] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.804 [2024-11-20 22:41:18.718520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.804 [2024-11-20 22:41:18.718533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.718545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.718570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.718650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.718747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.718823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.718871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.718945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.718969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.718981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.719000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.719048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7856 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.719119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.719252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 
[2024-11-20 22:41:18.719370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.719395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.805 [2024-11-20 22:41:18.719656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.805 [2024-11-20 22:41:18.719681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.805 [2024-11-20 22:41:18.719695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.719707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.719739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.719766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.719793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.719819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.719861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.719899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.719923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.719947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.719978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.719991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.720207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:32.806 [2024-11-20 22:41:18.720244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.806 [2024-11-20 22:41:18.720255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-20 22:41:18.720496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a40d40 is same with the state(5) to be set 00:21:32.806 [2024-11-20 22:41:18.720523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:32.806 [2024-11-20 22:41:18.720532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:32.806 [2024-11-20 22:41:18.720547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:8 PRP1 0x0 PRP2 0x0 00:21:32.806 [2024-11-20 22:41:18.720558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720613] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a40d40 was disconnected and freed. reset controller. 00:21:32.806 [2024-11-20 22:41:18.720643] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:32.806 [2024-11-20 22:41:18.720728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.806 [2024-11-20 22:41:18.720746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.806 [2024-11-20 22:41:18.720771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.806 [2024-11-20 22:41:18.720794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.806 [2024-11-20 22:41:18.720817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:18.720828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:32.806 [2024-11-20 22:41:18.720878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0e940 (9): Bad file descriptor 00:21:32.806 [2024-11-20 22:41:18.722723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:32.806 [2024-11-20 22:41:18.744587] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:32.806 [2024-11-20 22:41:22.271135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.806 [2024-11-20 22:41:22.271191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:22.271220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.806 [2024-11-20 22:41:22.271233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.806 [2024-11-20 22:41:22.271266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.807 [2024-11-20 22:41:22.271325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.271339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.807 [2024-11-20 22:41:22.271350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.271362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0e940 is same with the state(5) to be set 00:21:32.807 [2024-11-20 22:41:22.272902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.272930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.272952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.272966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.272981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.272994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.807 [2024-11-20 22:41:22.273460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.807 [2024-11-20 22:41:22.273486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.807 [2024-11-20 22:41:22.273598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.807 [2024-11-20 22:41:22.273839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-20 22:41:22.273852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.273865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.273877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.273891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.273911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.273925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.273938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.273952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.273963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.273984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 
[2024-11-20 22:41:22.274032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58296 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.808 [2024-11-20 22:41:22.274945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-20 22:41:22.274969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.808 [2024-11-20 22:41:22.274982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.274994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 
[2024-11-20 22:41:22.275151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.275591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.275975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.275988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.276000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.276013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.809 [2024-11-20 22:41:22.276025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.276038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.276051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.276064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.276077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.809 [2024-11-20 22:41:22.276090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.809 [2024-11-20 22:41:22.276102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.810 [2024-11-20 22:41:22.276134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.810 [2024-11-20 22:41:22.276241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 
[2024-11-20 22:41:22.276279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.810 [2024-11-20 22:41:22.276305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.810 [2024-11-20 22:41:22.276335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:22.276545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276558] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1ad90 is same with the state(5) to be set 00:21:32.810 [2024-11-20 22:41:22.276572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:32.810 [2024-11-20 22:41:22.276582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:32.810 [2024-11-20 22:41:22.276596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58024 len:8 PRP1 0x0 PRP2 0x0 00:21:32.810 [2024-11-20 22:41:22.276608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:22.276695] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a1ad90 was disconnected and freed. reset controller. 00:21:32.810 [2024-11-20 22:41:22.276710] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:32.810 [2024-11-20 22:41:22.276723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:32.810 [2024-11-20 22:41:22.278768] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:32.810 [2024-11-20 22:41:22.278801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0e940 (9): Bad file descriptor 00:21:32.810 [2024-11-20 22:41:22.301237] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:32.810 [2024-11-20 22:41:26.793812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.793892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.793936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.793953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.793969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.793987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 
22:41:26.794147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.810 [2024-11-20 22:41:26.794433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.810 [2024-11-20 22:41:26.794446 - 22:41:26.797450] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed run of per-command notices] every remaining queued READ and WRITE on sqid:1 (nsid:1, lba 101008 through 102120, len:8, SGL TRANSPORT DATA BLOCK or SGL DATA BLOCK OFFSET) completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while the submission queue was deleted for the failover; the individual command/completion pairs differ only in cid and lba. 00:21:32.813 [2024-11-20 22:41:26.797463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.813 [2024-11-20 22:41:26.797475] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.813 [2024-11-20 22:41:26.797489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41ef0 is same with the state(5) to be set 00:21:32.813 [2024-11-20 22:41:26.797504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:32.813 [2024-11-20 22:41:26.797513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:32.813 [2024-11-20 22:41:26.797523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101584 len:8 PRP1 0x0 PRP2 0x0 00:21:32.813 [2024-11-20 22:41:26.797542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.813 [2024-11-20 22:41:26.797598] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a41ef0 was disconnected and freed. reset controller. 00:21:32.813 [2024-11-20 22:41:26.797614] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:32.813 [2024-11-20 22:41:26.797665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.813 [2024-11-20 22:41:26.797740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.813 [2024-11-20 22:41:26.797756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.813 [2024-11-20 22:41:26.797769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.813 [2024-11-20 22:41:26.797783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.813 [2024-11-20 22:41:26.797795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.813 [2024-11-20 22:41:26.797808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.814 [2024-11-20 22:41:26.797820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.814 [2024-11-20 22:41:26.797833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:32.814 [2024-11-20 22:41:26.797879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0e940 (9): Bad file descriptor 00:21:32.814 [2024-11-20 22:41:26.800022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:32.814 [2024-11-20 22:41:26.818276] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
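Every flushed request above carries the same status, (00/08): Status Code Type 0x0 (generic command status) with Status Code 0x08, which NVMe defines as Command Aborted due to SQ Deletion. That is the expected outcome when bdev_nvme tears down the old I/O qpair before switching paths, as the "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" and "Resetting controller successful" notices confirm. A quick way to tally how many requests were flushed this way from a captured log, assuming a hypothetical file name for the saved bdevperf console output, is:

# bdevperf.log is a placeholder for wherever the output above was captured
grep -c 'ABORTED - SQ DELETION (00/08) qid:1' bdevperf.log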
00:21:32.814 00:21:32.814 Latency(us) 00:21:32.814 [2024-11-20T22:41:33.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.814 [2024-11-20T22:41:33.548Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:32.814 Verification LBA range: start 0x0 length 0x4000 00:21:32.814 NVMe0n1 : 15.01 15018.41 58.67 258.37 0.00 8364.36 390.98 14656.23 00:21:32.814 [2024-11-20T22:41:33.548Z] =================================================================================================================== 00:21:32.814 [2024-11-20T22:41:33.548Z] Total : 15018.41 58.67 258.37 0.00 8364.36 390.98 14656.23 00:21:32.814 Received shutdown signal, test time was about 15.000000 seconds 00:21:32.814 00:21:32.814 Latency(us) 00:21:32.814 [2024-11-20T22:41:33.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.814 [2024-11-20T22:41:33.548Z] =================================================================================================================== 00:21:32.814 [2024-11-20T22:41:33.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.814 22:41:32 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:32.814 22:41:32 -- host/failover.sh@65 -- # count=3 00:21:32.814 22:41:32 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:32.814 22:41:32 -- host/failover.sh@73 -- # bdevperf_pid=95638 00:21:32.814 22:41:32 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:32.814 22:41:32 -- host/failover.sh@75 -- # waitforlisten 95638 /var/tmp/bdevperf.sock 00:21:32.814 22:41:32 -- common/autotest_common.sh@829 -- # '[' -z 95638 ']' 00:21:32.814 22:41:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.814 22:41:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.814 22:41:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
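The MiB/s column in the 15-second summary above follows directly from the IOPS figure and the 4096-byte I/O size bdevperf was started with (-o 4096): 15018.41 IOPS x 4096 B is roughly 61.5 MB/s, or 58.67 MiB/s. A minimal shell check of that conversion, assuming only awk is available on the build host:

# hypothetical helper: convert an IOPS figure at a fixed I/O size into MiB/s
iops_to_mibs() {
    local iops=$1 io_size_bytes=$2
    awk -v i="$iops" -v s="$io_size_bytes" 'BEGIN { printf "%.2f\n", i * s / (1024 * 1024) }'
}

iops_to_mibs 15018.41 4096   # prints 58.67, matching the NVMe0n1 row above
iops_to_mibs 14500.95 4096   # prints 56.64, matching the one-second run later in the log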
00:21:32.814 22:41:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.814 22:41:32 -- common/autotest_common.sh@10 -- # set +x 00:21:33.381 22:41:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.381 22:41:33 -- common/autotest_common.sh@862 -- # return 0 00:21:33.381 22:41:33 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:33.639 [2024-11-20 22:41:34.150996] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:33.639 22:41:34 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:33.896 [2024-11-20 22:41:34.419303] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:33.896 22:41:34 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.155 NVMe0n1 00:21:34.155 22:41:34 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.413 00:21:34.413 22:41:35 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.672 00:21:34.672 22:41:35 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:34.672 22:41:35 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:34.930 22:41:35 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.189 22:41:35 -- host/failover.sh@87 -- # sleep 3 00:21:38.476 22:41:38 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:38.476 22:41:38 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:38.476 22:41:39 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:38.476 22:41:39 -- host/failover.sh@90 -- # run_test_pid=95775 00:21:38.476 22:41:39 -- host/failover.sh@92 -- # wait 95775 00:21:39.850 0 00:21:39.850 22:41:40 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:39.850 [2024-11-20 22:41:32.949485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:39.850 [2024-11-20 22:41:32.950131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95638 ] 00:21:39.850 [2024-11-20 22:41:33.087581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.850 [2024-11-20 22:41:33.157340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.850 [2024-11-20 22:41:35.757867] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:39.850 [2024-11-20 22:41:35.758494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.850 [2024-11-20 22:41:35.758612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.850 [2024-11-20 22:41:35.758713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.850 [2024-11-20 22:41:35.758789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.850 [2024-11-20 22:41:35.758851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.850 [2024-11-20 22:41:35.758917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.850 [2024-11-20 22:41:35.758978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.850 [2024-11-20 22:41:35.759043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.850 [2024-11-20 22:41:35.759103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:39.850 [2024-11-20 22:41:35.759209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:39.850 [2024-11-20 22:41:35.759346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137d940 (9): Bad file descriptor 00:21:39.850 [2024-11-20 22:41:35.769834] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:39.850 Running I/O for 1 seconds... 
00:21:39.850 00:21:39.850 Latency(us) 00:21:39.850 [2024-11-20T22:41:40.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.850 [2024-11-20T22:41:40.584Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:39.850 Verification LBA range: start 0x0 length 0x4000 00:21:39.850 NVMe0n1 : 1.01 14500.95 56.64 0.00 0.00 8786.55 1050.07 14239.19 00:21:39.850 [2024-11-20T22:41:40.584Z] =================================================================================================================== 00:21:39.850 [2024-11-20T22:41:40.584Z] Total : 14500.95 56.64 0.00 0.00 8786.55 1050.07 14239.19 00:21:39.850 22:41:40 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:39.850 22:41:40 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:39.850 22:41:40 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.109 22:41:40 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:40.109 22:41:40 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.367 22:41:40 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.626 22:41:41 -- host/failover.sh@101 -- # sleep 3 00:21:43.912 22:41:44 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:43.912 22:41:44 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:43.912 22:41:44 -- host/failover.sh@108 -- # killprocess 95638 00:21:43.912 22:41:44 -- common/autotest_common.sh@936 -- # '[' -z 95638 ']' 00:21:43.912 22:41:44 -- common/autotest_common.sh@940 -- # kill -0 95638 00:21:43.912 22:41:44 -- common/autotest_common.sh@941 -- # uname 00:21:43.912 22:41:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.912 22:41:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95638 00:21:43.912 22:41:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:43.912 22:41:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:43.912 killing process with pid 95638 00:21:43.912 22:41:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95638' 00:21:43.912 22:41:44 -- common/autotest_common.sh@955 -- # kill 95638 00:21:43.912 22:41:44 -- common/autotest_common.sh@960 -- # wait 95638 00:21:44.170 22:41:44 -- host/failover.sh@110 -- # sync 00:21:44.170 22:41:44 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:44.429 22:41:44 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:44.429 22:41:44 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:44.429 22:41:44 -- host/failover.sh@116 -- # nvmftestfini 00:21:44.429 22:41:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:44.429 22:41:44 -- nvmf/common.sh@116 -- # sync 00:21:44.429 22:41:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:44.429 22:41:44 -- nvmf/common.sh@119 -- # set +e 00:21:44.429 22:41:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:44.429 22:41:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:44.429 rmmod nvme_tcp 
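Taken together, the failover.sh steps traced above reduce to a short RPC sequence: the target exposes the same subsystem on extra ports, bdevperf attaches one controller path per port under the same bdev name, and detaching the original path forces bdev_nvme to fail over to one of the remaining trids while the controller object survives. A condensed sketch of that sequence, reusing the repo paths, addresses, and ports from the trace (not a copy of the script itself):

# wrappers for the two RPC endpoints used in the trace
tgt_rpc()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
perf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
nqn=nqn.2016-06.io.spdk:cnode1

# target side: listen on two extra ports besides 4420
tgt_rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
tgt_rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

# initiator side: register all three paths under the same controller name
for port in 4420 4421 4422; do
    perf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
done

# drop the original path; bdev_nvme should fail over to a remaining trid
perf_rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
sleep 3
perf_rpc bdev_nvme_get_controllers | grep -q NVMe0   # controller object must survive the switch

The test then repeats the detach/check round for the other ports and finally counts the "Resetting controller successful" lines to confirm that each removal really produced a failover.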
00:21:44.429 rmmod nvme_fabrics 00:21:44.429 rmmod nvme_keyring 00:21:44.429 22:41:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:44.429 22:41:45 -- nvmf/common.sh@123 -- # set -e 00:21:44.429 22:41:45 -- nvmf/common.sh@124 -- # return 0 00:21:44.429 22:41:45 -- nvmf/common.sh@477 -- # '[' -n 95275 ']' 00:21:44.429 22:41:45 -- nvmf/common.sh@478 -- # killprocess 95275 00:21:44.429 22:41:45 -- common/autotest_common.sh@936 -- # '[' -z 95275 ']' 00:21:44.429 22:41:45 -- common/autotest_common.sh@940 -- # kill -0 95275 00:21:44.429 22:41:45 -- common/autotest_common.sh@941 -- # uname 00:21:44.429 22:41:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:44.429 22:41:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95275 00:21:44.429 22:41:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:44.429 22:41:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:44.429 killing process with pid 95275 00:21:44.429 22:41:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95275' 00:21:44.429 22:41:45 -- common/autotest_common.sh@955 -- # kill 95275 00:21:44.429 22:41:45 -- common/autotest_common.sh@960 -- # wait 95275 00:21:44.687 22:41:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:44.687 22:41:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:44.687 22:41:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:44.687 22:41:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.687 22:41:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:44.687 22:41:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.687 22:41:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.687 22:41:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.687 22:41:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:44.687 ************************************ 00:21:44.687 END TEST nvmf_failover 00:21:44.687 ************************************ 00:21:44.687 00:21:44.687 real 0m32.669s 00:21:44.687 user 2m6.154s 00:21:44.687 sys 0m5.038s 00:21:44.687 22:41:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:44.687 22:41:45 -- common/autotest_common.sh@10 -- # set +x 00:21:44.687 22:41:45 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:44.687 22:41:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:44.687 22:41:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.687 22:41:45 -- common/autotest_common.sh@10 -- # set +x 00:21:44.687 ************************************ 00:21:44.687 START TEST nvmf_discovery 00:21:44.687 ************************************ 00:21:44.687 22:41:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:44.687 * Looking for test storage... 
00:21:44.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:44.947 22:41:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:44.947 22:41:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:44.947 22:41:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:44.947 22:41:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:44.947 22:41:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:44.947 22:41:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:44.947 22:41:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:44.947 22:41:45 -- scripts/common.sh@335 -- # IFS=.-: 00:21:44.947 22:41:45 -- scripts/common.sh@335 -- # read -ra ver1 00:21:44.947 22:41:45 -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.947 22:41:45 -- scripts/common.sh@336 -- # read -ra ver2 00:21:44.947 22:41:45 -- scripts/common.sh@337 -- # local 'op=<' 00:21:44.947 22:41:45 -- scripts/common.sh@339 -- # ver1_l=2 00:21:44.947 22:41:45 -- scripts/common.sh@340 -- # ver2_l=1 00:21:44.947 22:41:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:44.947 22:41:45 -- scripts/common.sh@343 -- # case "$op" in 00:21:44.947 22:41:45 -- scripts/common.sh@344 -- # : 1 00:21:44.947 22:41:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:44.947 22:41:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.947 22:41:45 -- scripts/common.sh@364 -- # decimal 1 00:21:44.947 22:41:45 -- scripts/common.sh@352 -- # local d=1 00:21:44.947 22:41:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.947 22:41:45 -- scripts/common.sh@354 -- # echo 1 00:21:44.947 22:41:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:44.947 22:41:45 -- scripts/common.sh@365 -- # decimal 2 00:21:44.947 22:41:45 -- scripts/common.sh@352 -- # local d=2 00:21:44.947 22:41:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.947 22:41:45 -- scripts/common.sh@354 -- # echo 2 00:21:44.947 22:41:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:44.947 22:41:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:44.947 22:41:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:44.947 22:41:45 -- scripts/common.sh@367 -- # return 0 00:21:44.947 22:41:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.947 22:41:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:44.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.947 --rc genhtml_branch_coverage=1 00:21:44.947 --rc genhtml_function_coverage=1 00:21:44.947 --rc genhtml_legend=1 00:21:44.947 --rc geninfo_all_blocks=1 00:21:44.947 --rc geninfo_unexecuted_blocks=1 00:21:44.947 00:21:44.947 ' 00:21:44.947 22:41:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:44.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.947 --rc genhtml_branch_coverage=1 00:21:44.947 --rc genhtml_function_coverage=1 00:21:44.947 --rc genhtml_legend=1 00:21:44.947 --rc geninfo_all_blocks=1 00:21:44.947 --rc geninfo_unexecuted_blocks=1 00:21:44.947 00:21:44.947 ' 00:21:44.947 22:41:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:44.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.947 --rc genhtml_branch_coverage=1 00:21:44.947 --rc genhtml_function_coverage=1 00:21:44.947 --rc genhtml_legend=1 00:21:44.947 --rc geninfo_all_blocks=1 00:21:44.947 --rc geninfo_unexecuted_blocks=1 00:21:44.947 00:21:44.947 ' 00:21:44.947 
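The lt/ge helpers traced here split both version strings on dots and compare them field by field (the decimal 1 and decimal 2 steps above) to decide whether the installed lcov is new enough for the extra branch and function coverage flags. A rough stand-alone equivalent of that comparison, written as a hedged sketch rather than a copy of scripts/common.sh, is:

# returns success (0) when $1 is an older version than $2, comparing dot-separated fields numerically
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # corresponds to the lt 1.15 2 check above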
22:41:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:44.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.947 --rc genhtml_branch_coverage=1 00:21:44.947 --rc genhtml_function_coverage=1 00:21:44.947 --rc genhtml_legend=1 00:21:44.947 --rc geninfo_all_blocks=1 00:21:44.947 --rc geninfo_unexecuted_blocks=1 00:21:44.947 00:21:44.947 ' 00:21:44.947 22:41:45 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.947 22:41:45 -- nvmf/common.sh@7 -- # uname -s 00:21:44.947 22:41:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.947 22:41:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.947 22:41:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.947 22:41:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.947 22:41:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.947 22:41:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.947 22:41:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.947 22:41:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.947 22:41:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.947 22:41:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.947 22:41:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:21:44.947 22:41:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:21:44.947 22:41:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.947 22:41:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.947 22:41:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.947 22:41:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.947 22:41:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.947 22:41:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.947 22:41:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.947 22:41:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.947 22:41:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.947 22:41:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.947 22:41:45 -- paths/export.sh@5 -- # export PATH 00:21:44.947 22:41:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.947 22:41:45 -- nvmf/common.sh@46 -- # : 0 00:21:44.947 22:41:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:44.947 22:41:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:44.947 22:41:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:44.947 22:41:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.947 22:41:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.947 22:41:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:44.947 22:41:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:44.947 22:41:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:44.947 22:41:45 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:44.947 22:41:45 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:44.947 22:41:45 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:44.947 22:41:45 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:44.947 22:41:45 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:44.947 22:41:45 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:44.947 22:41:45 -- host/discovery.sh@25 -- # nvmftestinit 00:21:44.947 22:41:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:44.947 22:41:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.947 22:41:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:44.947 22:41:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:44.947 22:41:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:44.947 22:41:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.947 22:41:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.947 22:41:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.947 22:41:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:44.947 22:41:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:44.947 22:41:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:44.947 22:41:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:44.947 22:41:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:44.947 22:41:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:44.947 22:41:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.947 22:41:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.947 22:41:45 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:44.947 22:41:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:44.947 22:41:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.947 22:41:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.947 22:41:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.947 22:41:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.947 22:41:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.947 22:41:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.947 22:41:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.948 22:41:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.948 22:41:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:44.948 22:41:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:44.948 Cannot find device "nvmf_tgt_br" 00:21:44.948 22:41:45 -- nvmf/common.sh@154 -- # true 00:21:44.948 22:41:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.948 Cannot find device "nvmf_tgt_br2" 00:21:44.948 22:41:45 -- nvmf/common.sh@155 -- # true 00:21:44.948 22:41:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:44.948 22:41:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:44.948 Cannot find device "nvmf_tgt_br" 00:21:44.948 22:41:45 -- nvmf/common.sh@157 -- # true 00:21:44.948 22:41:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:44.948 Cannot find device "nvmf_tgt_br2" 00:21:44.948 22:41:45 -- nvmf/common.sh@158 -- # true 00:21:44.948 22:41:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:45.206 22:41:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:45.206 22:41:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:45.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:45.206 22:41:45 -- nvmf/common.sh@161 -- # true 00:21:45.206 22:41:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:45.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:45.206 22:41:45 -- nvmf/common.sh@162 -- # true 00:21:45.206 22:41:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:45.206 22:41:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:45.206 22:41:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:45.206 22:41:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:45.206 22:41:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:45.206 22:41:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:45.206 22:41:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:45.206 22:41:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:45.206 22:41:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:45.206 22:41:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:45.206 22:41:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:45.206 22:41:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:45.206 22:41:45 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:45.206 22:41:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:45.206 22:41:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:45.206 22:41:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:45.206 22:41:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:45.206 22:41:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:45.206 22:41:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:45.206 22:41:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:45.206 22:41:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:45.206 22:41:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:45.206 22:41:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:45.206 22:41:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:45.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:21:45.206 00:21:45.206 --- 10.0.0.2 ping statistics --- 00:21:45.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.206 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:45.206 22:41:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:45.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:45.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:45.206 00:21:45.206 --- 10.0.0.3 ping statistics --- 00:21:45.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.206 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:45.206 22:41:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:45.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:45.206 00:21:45.206 --- 10.0.0.1 ping statistics --- 00:21:45.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.206 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:45.206 22:41:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.206 22:41:45 -- nvmf/common.sh@421 -- # return 0 00:21:45.206 22:41:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:45.206 22:41:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.206 22:41:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:45.206 22:41:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:45.206 22:41:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.206 22:41:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:45.206 22:41:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:45.207 22:41:45 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:45.207 22:41:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:45.207 22:41:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.207 22:41:45 -- common/autotest_common.sh@10 -- # set +x 00:21:45.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
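(For orientation, a condensed hand-written summary of the nvmf_veth_init steps traced above: the initiator keeps 10.0.0.1 on the host side, the target addresses 10.0.0.2/10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, and the two veth legs are joined by the nvmf_br bridge. This only restates commands already shown in the trace; the second target interface pair (nvmf_tgt_if2 / 10.0.0.3) is elided for brevity.)
# namespace + veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addressing: initiator on the host, target inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge both veth peers so 10.0.0.1 can reach 10.0.0.2, then open the NVMe/TCP port
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT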
00:21:45.207 22:41:45 -- nvmf/common.sh@469 -- # nvmfpid=96091 00:21:45.207 22:41:45 -- nvmf/common.sh@470 -- # waitforlisten 96091 00:21:45.207 22:41:45 -- common/autotest_common.sh@829 -- # '[' -z 96091 ']' 00:21:45.207 22:41:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.207 22:41:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.207 22:41:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.207 22:41:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.207 22:41:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.207 22:41:45 -- common/autotest_common.sh@10 -- # set +x 00:21:45.466 [2024-11-20 22:41:45.985339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:45.466 [2024-11-20 22:41:45.985425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.466 [2024-11-20 22:41:46.124091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.466 [2024-11-20 22:41:46.184814] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:45.466 [2024-11-20 22:41:46.185238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.466 [2024-11-20 22:41:46.185262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.466 [2024-11-20 22:41:46.185271] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.466 [2024-11-20 22:41:46.185325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.402 22:41:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.402 22:41:46 -- common/autotest_common.sh@862 -- # return 0 00:21:46.402 22:41:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:46.402 22:41:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.402 22:41:46 -- common/autotest_common.sh@10 -- # set +x 00:21:46.402 22:41:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.402 22:41:47 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.402 22:41:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.402 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:21:46.402 [2024-11-20 22:41:47.046029] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.402 22:41:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.402 22:41:47 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:46.402 22:41:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.402 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:21:46.402 [2024-11-20 22:41:47.054176] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:46.402 22:41:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.402 22:41:47 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:46.402 22:41:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.402 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:21:46.402 null0 00:21:46.402 22:41:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.402 22:41:47 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:46.402 22:41:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.402 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:21:46.402 null1 00:21:46.402 22:41:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.402 22:41:47 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:46.402 22:41:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.402 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:21:46.402 22:41:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.402 22:41:47 -- host/discovery.sh@45 -- # hostpid=96141 00:21:46.402 22:41:47 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:46.402 22:41:47 -- host/discovery.sh@46 -- # waitforlisten 96141 /tmp/host.sock 00:21:46.402 22:41:47 -- common/autotest_common.sh@829 -- # '[' -z 96141 ']' 00:21:46.402 22:41:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:46.402 22:41:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.402 22:41:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:46.402 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:46.402 22:41:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.402 22:41:47 -- common/autotest_common.sh@10 -- # set +x 00:21:46.660 [2024-11-20 22:41:47.140174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:46.660 [2024-11-20 22:41:47.140577] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96141 ] 00:21:46.660 [2024-11-20 22:41:47.275331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.660 [2024-11-20 22:41:47.344144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:46.660 [2024-11-20 22:41:47.344329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.594 22:41:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.594 22:41:48 -- common/autotest_common.sh@862 -- # return 0 00:21:47.594 22:41:48 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.594 22:41:48 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:47.594 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.594 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.594 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.594 22:41:48 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:47.594 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.594 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.594 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.594 22:41:48 -- host/discovery.sh@72 -- # notify_id=0 00:21:47.594 22:41:48 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:47.594 22:41:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.594 22:41:48 -- host/discovery.sh@59 -- # sort 00:21:47.594 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.594 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.594 22:41:48 -- host/discovery.sh@59 -- # xargs 00:21:47.594 22:41:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.594 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.594 22:41:48 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:47.594 22:41:48 -- host/discovery.sh@79 -- # get_bdev_list 00:21:47.594 22:41:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.595 22:41:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.595 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.595 22:41:48 -- host/discovery.sh@55 -- # sort 00:21:47.595 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.595 22:41:48 -- host/discovery.sh@55 -- # xargs 00:21:47.595 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.595 22:41:48 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:47.595 22:41:48 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:47.595 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.595 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.595 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.595 22:41:48 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:47.595 22:41:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.595 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.595 22:41:48 -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.595 22:41:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.595 22:41:48 -- host/discovery.sh@59 -- # sort 00:21:47.595 22:41:48 -- host/discovery.sh@59 -- # xargs 00:21:47.595 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.595 22:41:48 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:47.595 22:41:48 -- host/discovery.sh@83 -- # get_bdev_list 00:21:47.595 22:41:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.595 22:41:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.595 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.595 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.595 22:41:48 -- host/discovery.sh@55 -- # sort 00:21:47.595 22:41:48 -- host/discovery.sh@55 -- # xargs 00:21:47.595 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.859 22:41:48 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:47.859 22:41:48 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:47.859 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.859 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.859 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.859 22:41:48 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:47.859 22:41:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.859 22:41:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.859 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.859 22:41:48 -- host/discovery.sh@59 -- # sort 00:21:47.859 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.859 22:41:48 -- host/discovery.sh@59 -- # xargs 00:21:47.859 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.859 22:41:48 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:47.859 22:41:48 -- host/discovery.sh@87 -- # get_bdev_list 00:21:47.859 22:41:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.859 22:41:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.859 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.859 22:41:48 -- host/discovery.sh@55 -- # sort 00:21:47.859 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.859 22:41:48 -- host/discovery.sh@55 -- # xargs 00:21:47.859 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.859 22:41:48 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:47.859 22:41:48 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:47.859 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.859 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.859 [2024-11-20 22:41:48.458595] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.859 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.859 22:41:48 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:47.859 22:41:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.859 22:41:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.859 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.859 22:41:48 -- host/discovery.sh@59 -- # sort 00:21:47.859 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.859 22:41:48 -- host/discovery.sh@59 -- # xargs 
00:21:47.859 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.859 22:41:48 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:47.859 22:41:48 -- host/discovery.sh@93 -- # get_bdev_list 00:21:47.859 22:41:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.859 22:41:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.859 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.859 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.859 22:41:48 -- host/discovery.sh@55 -- # sort 00:21:47.859 22:41:48 -- host/discovery.sh@55 -- # xargs 00:21:47.859 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.859 22:41:48 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:47.859 22:41:48 -- host/discovery.sh@94 -- # get_notification_count 00:21:47.859 22:41:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:47.859 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.859 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:47.859 22:41:48 -- host/discovery.sh@74 -- # jq '. | length' 00:21:47.859 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.146 22:41:48 -- host/discovery.sh@74 -- # notification_count=0 00:21:48.146 22:41:48 -- host/discovery.sh@75 -- # notify_id=0 00:21:48.146 22:41:48 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:48.146 22:41:48 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:48.146 22:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.146 22:41:48 -- common/autotest_common.sh@10 -- # set +x 00:21:48.146 22:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.146 22:41:48 -- host/discovery.sh@100 -- # sleep 1 00:21:48.414 [2024-11-20 22:41:49.099538] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:48.414 [2024-11-20 22:41:49.099568] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:48.414 [2024-11-20 22:41:49.099585] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.679 [2024-11-20 22:41:49.186274] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:48.679 [2024-11-20 22:41:49.242033] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.679 [2024-11-20 22:41:49.242059] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:48.941 22:41:49 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:48.941 22:41:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.941 22:41:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.941 22:41:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.941 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:21:48.941 22:41:49 -- host/discovery.sh@59 -- # sort 00:21:48.941 22:41:49 -- host/discovery.sh@59 -- # xargs 00:21:48.941 22:41:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@102 -- # get_bdev_list 00:21:49.200 22:41:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.200 
22:41:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.200 22:41:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.200 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.200 22:41:49 -- host/discovery.sh@55 -- # sort 00:21:49.200 22:41:49 -- host/discovery.sh@55 -- # xargs 00:21:49.200 22:41:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:49.200 22:41:49 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.200 22:41:49 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.200 22:41:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.200 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.200 22:41:49 -- host/discovery.sh@63 -- # xargs 00:21:49.200 22:41:49 -- host/discovery.sh@63 -- # sort -n 00:21:49.200 22:41:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@104 -- # get_notification_count 00:21:49.200 22:41:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:49.200 22:41:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.200 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.200 22:41:49 -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.200 22:41:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@74 -- # notification_count=1 00:21:49.200 22:41:49 -- host/discovery.sh@75 -- # notify_id=1 00:21:49.200 22:41:49 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:49.200 22:41:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.200 22:41:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.200 22:41:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.200 22:41:49 -- host/discovery.sh@109 -- # sleep 1 00:21:50.136 22:41:50 -- host/discovery.sh@110 -- # get_bdev_list 00:21:50.136 22:41:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.136 22:41:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.136 22:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.136 22:41:50 -- common/autotest_common.sh@10 -- # set +x 00:21:50.136 22:41:50 -- host/discovery.sh@55 -- # sort 00:21:50.136 22:41:50 -- host/discovery.sh@55 -- # xargs 00:21:50.395 22:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.395 22:41:50 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:50.395 22:41:50 -- host/discovery.sh@111 -- # get_notification_count 00:21:50.395 22:41:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:50.395 22:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.395 22:41:50 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:50.395 22:41:50 -- common/autotest_common.sh@10 -- # set +x 00:21:50.395 22:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.395 22:41:50 -- host/discovery.sh@74 -- # notification_count=1 00:21:50.395 22:41:50 -- host/discovery.sh@75 -- # notify_id=2 00:21:50.395 22:41:50 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:50.395 22:41:50 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:50.395 22:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.395 22:41:50 -- common/autotest_common.sh@10 -- # set +x 00:21:50.395 [2024-11-20 22:41:50.967643] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:50.395 [2024-11-20 22:41:50.968836] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:50.395 [2024-11-20 22:41:50.968867] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:50.395 22:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.395 22:41:50 -- host/discovery.sh@117 -- # sleep 1 00:21:50.395 [2024-11-20 22:41:51.054880] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:50.395 [2024-11-20 22:41:51.112086] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:50.395 [2024-11-20 22:41:51.112108] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:50.395 [2024-11-20 22:41:51.112114] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:51.330 22:41:51 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:51.330 22:41:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.330 22:41:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.330 22:41:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.330 22:41:51 -- common/autotest_common.sh@10 -- # set +x 00:21:51.330 22:41:51 -- host/discovery.sh@59 -- # xargs 00:21:51.330 22:41:51 -- host/discovery.sh@59 -- # sort 00:21:51.330 22:41:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.330 22:41:52 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.330 22:41:52 -- host/discovery.sh@119 -- # get_bdev_list 00:21:51.330 22:41:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.330 22:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.330 22:41:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.330 22:41:52 -- common/autotest_common.sh@10 -- # set +x 00:21:51.330 22:41:52 -- host/discovery.sh@55 -- # sort 00:21:51.330 22:41:52 -- host/discovery.sh@55 -- # xargs 00:21:51.590 22:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.590 22:41:52 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:51.590 22:41:52 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:51.590 22:41:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:51.590 22:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.590 22:41:52 -- common/autotest_common.sh@10 -- # set +x 00:21:51.590 22:41:52 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:21:51.590 22:41:52 -- host/discovery.sh@63 -- # sort -n 00:21:51.590 22:41:52 -- host/discovery.sh@63 -- # xargs 00:21:51.590 22:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.590 22:41:52 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:51.590 22:41:52 -- host/discovery.sh@121 -- # get_notification_count 00:21:51.590 22:41:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:51.590 22:41:52 -- host/discovery.sh@74 -- # jq '. | length' 00:21:51.590 22:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.590 22:41:52 -- common/autotest_common.sh@10 -- # set +x 00:21:51.590 22:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.590 22:41:52 -- host/discovery.sh@74 -- # notification_count=0 00:21:51.590 22:41:52 -- host/discovery.sh@75 -- # notify_id=2 00:21:51.590 22:41:52 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:51.590 22:41:52 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:51.590 22:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.590 22:41:52 -- common/autotest_common.sh@10 -- # set +x 00:21:51.590 [2024-11-20 22:41:52.193100] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:51.590 [2024-11-20 22:41:52.193126] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:51.590 22:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.590 22:41:52 -- host/discovery.sh@127 -- # sleep 1 00:21:51.590 [2024-11-20 22:41:52.199832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.590 [2024-11-20 22:41:52.199867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.590 [2024-11-20 22:41:52.199880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.590 [2024-11-20 22:41:52.199888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.590 [2024-11-20 22:41:52.199897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.590 [2024-11-20 22:41:52.199904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.590 [2024-11-20 22:41:52.199912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.590 [2024-11-20 22:41:52.199920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.590 [2024-11-20 22:41:52.199927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6cf0 is same with the state(5) to be set 00:21:51.590 [2024-11-20 22:41:52.209781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6cf0 (9): Bad file descriptor 00:21:51.590 [2024-11-20 22:41:52.219799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.590 [2024-11-20 
22:41:52.219886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.590 [2024-11-20 22:41:52.219927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.590 [2024-11-20 22:41:52.219942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e6cf0 with addr=10.0.0.2, port=4420 00:21:51.590 [2024-11-20 22:41:52.219952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6cf0 is same with the state(5) to be set 00:21:51.590 [2024-11-20 22:41:52.219966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6cf0 (9): Bad file descriptor 00:21:51.590 [2024-11-20 22:41:52.219979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.590 [2024-11-20 22:41:52.219987] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.590 [2024-11-20 22:41:52.219995] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.590 [2024-11-20 22:41:52.220009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.590 [2024-11-20 22:41:52.229847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.590 [2024-11-20 22:41:52.229918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.590 [2024-11-20 22:41:52.229957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.590 [2024-11-20 22:41:52.229971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e6cf0 with addr=10.0.0.2, port=4420 00:21:51.590 [2024-11-20 22:41:52.229980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6cf0 is same with the state(5) to be set 00:21:51.590 [2024-11-20 22:41:52.229993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6cf0 (9): Bad file descriptor 00:21:51.590 [2024-11-20 22:41:52.230005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.590 [2024-11-20 22:41:52.230012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.590 [2024-11-20 22:41:52.230020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.591 [2024-11-20 22:41:52.230032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.591 [2024-11-20 22:41:52.239894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.591 [2024-11-20 22:41:52.240123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.591 [2024-11-20 22:41:52.240168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.591 [2024-11-20 22:41:52.240183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e6cf0 with addr=10.0.0.2, port=4420 00:21:51.591 [2024-11-20 22:41:52.240193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6cf0 is same with the state(5) to be set 00:21:51.591 [2024-11-20 22:41:52.240208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6cf0 (9): Bad file descriptor 00:21:51.591 [2024-11-20 22:41:52.240234] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.591 [2024-11-20 22:41:52.240243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.591 [2024-11-20 22:41:52.240252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.591 [2024-11-20 22:41:52.240266] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.591 [2024-11-20 22:41:52.250084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.591 [2024-11-20 22:41:52.250154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.591 [2024-11-20 22:41:52.250193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.591 [2024-11-20 22:41:52.250207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e6cf0 with addr=10.0.0.2, port=4420 00:21:51.591 [2024-11-20 22:41:52.250216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6cf0 is same with the state(5) to be set 00:21:51.591 [2024-11-20 22:41:52.250229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6cf0 (9): Bad file descriptor 00:21:51.591 [2024-11-20 22:41:52.250250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.591 [2024-11-20 22:41:52.250259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.591 [2024-11-20 22:41:52.250266] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.591 [2024-11-20 22:41:52.250291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.591 [2024-11-20 22:41:52.260127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.591 [2024-11-20 22:41:52.260196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.591 [2024-11-20 22:41:52.260233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.591 [2024-11-20 22:41:52.260246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e6cf0 with addr=10.0.0.2, port=4420 00:21:51.591 [2024-11-20 22:41:52.260256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6cf0 is same with the state(5) to be set 00:21:51.591 [2024-11-20 22:41:52.260269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6cf0 (9): Bad file descriptor 00:21:51.591 [2024-11-20 22:41:52.260305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.591 [2024-11-20 22:41:52.260315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.591 [2024-11-20 22:41:52.260323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.591 [2024-11-20 22:41:52.260336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.591 [2024-11-20 22:41:52.270170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.591 [2024-11-20 22:41:52.270238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.591 [2024-11-20 22:41:52.270289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.591 [2024-11-20 22:41:52.270305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e6cf0 with addr=10.0.0.2, port=4420 00:21:51.591 [2024-11-20 22:41:52.270314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6cf0 is same with the state(5) to be set 00:21:51.591 [2024-11-20 22:41:52.270328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6cf0 (9): Bad file descriptor 00:21:51.591 [2024-11-20 22:41:52.270349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.591 [2024-11-20 22:41:52.270357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.591 [2024-11-20 22:41:52.270365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.591 [2024-11-20 22:41:52.270395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
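(Reading aid for the block above: the repeated "connect() failed, errno = 111" / "Resetting controller failed." messages are the expected fallout of the nvmf_subsystem_remove_listener call traced at host/discovery.sh@126 -- the host's nvme0 controller keeps getting ECONNREFUSED on 10.0.0.2:4420 until the discovery poller fetches a fresh log page and drops the 4420 path, as the next lines confirm. A hedged sketch of the equivalent manual check, assuming the standard scripts/rpc.py client rather than the harness's rpc_cmd wrapper:)
# target side: drop the 4420 listener (the step traced at host/discovery.sh@126)
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host side: once the discovery poller catches up, only the 4421 path should remain
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
# expected output: 4421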
00:21:51.591 [2024-11-20 22:41:52.279352] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:51.591 [2024-11-20 22:41:52.279376] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:52.528 22:41:53 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:52.528 22:41:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:52.528 22:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.528 22:41:53 -- common/autotest_common.sh@10 -- # set +x 00:21:52.528 22:41:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:52.528 22:41:53 -- host/discovery.sh@59 -- # sort 00:21:52.528 22:41:53 -- host/discovery.sh@59 -- # xargs 00:21:52.528 22:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.528 22:41:53 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.528 22:41:53 -- host/discovery.sh@129 -- # get_bdev_list 00:21:52.528 22:41:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.528 22:41:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.528 22:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.528 22:41:53 -- common/autotest_common.sh@10 -- # set +x 00:21:52.528 22:41:53 -- host/discovery.sh@55 -- # sort 00:21:52.528 22:41:53 -- host/discovery.sh@55 -- # xargs 00:21:52.787 22:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.787 22:41:53 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.787 22:41:53 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:52.787 22:41:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:52.787 22:41:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:52.787 22:41:53 -- host/discovery.sh@63 -- # sort -n 00:21:52.787 22:41:53 -- host/discovery.sh@63 -- # xargs 00:21:52.787 22:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.787 22:41:53 -- common/autotest_common.sh@10 -- # set +x 00:21:52.787 22:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.787 22:41:53 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:52.787 22:41:53 -- host/discovery.sh@131 -- # get_notification_count 00:21:52.787 22:41:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:52.787 22:41:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.787 22:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.787 22:41:53 -- common/autotest_common.sh@10 -- # set +x 00:21:52.787 22:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.787 22:41:53 -- host/discovery.sh@74 -- # notification_count=0 00:21:52.787 22:41:53 -- host/discovery.sh@75 -- # notify_id=2 00:21:52.787 22:41:53 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:52.787 22:41:53 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:52.787 22:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.787 22:41:53 -- common/autotest_common.sh@10 -- # set +x 00:21:52.787 22:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.787 22:41:53 -- host/discovery.sh@135 -- # sleep 1 00:21:53.723 22:41:54 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:53.723 22:41:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:53.723 22:41:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:53.723 22:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.723 22:41:54 -- common/autotest_common.sh@10 -- # set +x 00:21:53.723 22:41:54 -- host/discovery.sh@59 -- # sort 00:21:53.723 22:41:54 -- host/discovery.sh@59 -- # xargs 00:21:53.723 22:41:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.982 22:41:54 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:53.982 22:41:54 -- host/discovery.sh@137 -- # get_bdev_list 00:21:53.982 22:41:54 -- host/discovery.sh@55 -- # sort 00:21:53.982 22:41:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.982 22:41:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:53.982 22:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.982 22:41:54 -- host/discovery.sh@55 -- # xargs 00:21:53.982 22:41:54 -- common/autotest_common.sh@10 -- # set +x 00:21:53.982 22:41:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.982 22:41:54 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:53.982 22:41:54 -- host/discovery.sh@138 -- # get_notification_count 00:21:53.982 22:41:54 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:53.982 22:41:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:53.982 22:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.982 22:41:54 -- common/autotest_common.sh@10 -- # set +x 00:21:53.982 22:41:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.982 22:41:54 -- host/discovery.sh@74 -- # notification_count=2 00:21:53.982 22:41:54 -- host/discovery.sh@75 -- # notify_id=4 00:21:53.982 22:41:54 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:53.982 22:41:54 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.982 22:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.982 22:41:54 -- common/autotest_common.sh@10 -- # set +x 00:21:54.918 [2024-11-20 22:41:55.580211] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:54.918 [2024-11-20 22:41:55.580230] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:54.918 [2024-11-20 22:41:55.580245] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:55.177 [2024-11-20 22:41:55.667300] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:55.178 [2024-11-20 22:41:55.726140] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:55.178 [2024-11-20 22:41:55.726171] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:55.178 22:41:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.178 22:41:55 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.178 22:41:55 -- common/autotest_common.sh@650 -- # local es=0 00:21:55.178 22:41:55 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.178 22:41:55 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:55.178 22:41:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.178 22:41:55 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:55.178 22:41:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.178 22:41:55 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.178 22:41:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.178 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.178 2024/11/20 22:41:55 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:55.178 request: 00:21:55.178 { 00:21:55.178 "method": "bdev_nvme_start_discovery", 00:21:55.178 "params": { 00:21:55.178 "name": "nvme", 00:21:55.178 "trtype": "tcp", 00:21:55.178 "traddr": "10.0.0.2", 00:21:55.178 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:55.178 
"adrfam": "ipv4", 00:21:55.178 "trsvcid": "8009", 00:21:55.178 "wait_for_attach": true 00:21:55.178 } 00:21:55.178 } 00:21:55.178 Got JSON-RPC error response 00:21:55.178 GoRPCClient: error on JSON-RPC call 00:21:55.178 22:41:55 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:55.178 22:41:55 -- common/autotest_common.sh@653 -- # es=1 00:21:55.178 22:41:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.178 22:41:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.178 22:41:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.178 22:41:55 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:55.178 22:41:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:55.178 22:41:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:55.178 22:41:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.178 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.178 22:41:55 -- host/discovery.sh@67 -- # xargs 00:21:55.178 22:41:55 -- host/discovery.sh@67 -- # sort 00:21:55.178 22:41:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.178 22:41:55 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:55.178 22:41:55 -- host/discovery.sh@147 -- # get_bdev_list 00:21:55.178 22:41:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:55.178 22:41:55 -- host/discovery.sh@55 -- # sort 00:21:55.178 22:41:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.178 22:41:55 -- host/discovery.sh@55 -- # xargs 00:21:55.178 22:41:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.178 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.178 22:41:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.178 22:41:55 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:55.178 22:41:55 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.178 22:41:55 -- common/autotest_common.sh@650 -- # local es=0 00:21:55.178 22:41:55 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.178 22:41:55 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:55.178 22:41:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.178 22:41:55 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:55.178 22:41:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.178 22:41:55 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.178 22:41:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.178 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.178 2024/11/20 22:41:55 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:55.178 request: 00:21:55.178 { 00:21:55.178 "method": "bdev_nvme_start_discovery", 00:21:55.178 "params": { 00:21:55.178 "name": "nvme_second", 00:21:55.178 "trtype": "tcp", 00:21:55.178 "traddr": "10.0.0.2", 
00:21:55.178 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:55.178 "adrfam": "ipv4", 00:21:55.178 "trsvcid": "8009", 00:21:55.178 "wait_for_attach": true 00:21:55.178 } 00:21:55.178 } 00:21:55.178 Got JSON-RPC error response 00:21:55.178 GoRPCClient: error on JSON-RPC call 00:21:55.178 22:41:55 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:55.178 22:41:55 -- common/autotest_common.sh@653 -- # es=1 00:21:55.178 22:41:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.178 22:41:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.178 22:41:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.178 22:41:55 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:55.178 22:41:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:55.178 22:41:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:55.178 22:41:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.178 22:41:55 -- host/discovery.sh@67 -- # xargs 00:21:55.178 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.178 22:41:55 -- host/discovery.sh@67 -- # sort 00:21:55.178 22:41:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.437 22:41:55 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:55.437 22:41:55 -- host/discovery.sh@153 -- # get_bdev_list 00:21:55.437 22:41:55 -- host/discovery.sh@55 -- # sort 00:21:55.437 22:41:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.437 22:41:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:55.437 22:41:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.437 22:41:55 -- host/discovery.sh@55 -- # xargs 00:21:55.437 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:21:55.437 22:41:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.437 22:41:55 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:55.437 22:41:55 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:55.437 22:41:55 -- common/autotest_common.sh@650 -- # local es=0 00:21:55.437 22:41:55 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:55.437 22:41:55 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:55.437 22:41:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.437 22:41:55 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:55.437 22:41:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.437 22:41:55 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:55.437 22:41:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.437 22:41:55 -- common/autotest_common.sh@10 -- # set +x 00:21:56.373 [2024-11-20 22:41:56.979643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.374 [2024-11-20 22:41:56.979709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.374 [2024-11-20 22:41:56.979724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e6300 with addr=10.0.0.2, port=8010 00:21:56.374 [2024-11-20 22:41:56.979737] 
nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:56.374 [2024-11-20 22:41:56.979745] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:56.374 [2024-11-20 22:41:56.979752] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:57.310 [2024-11-20 22:41:57.979633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.310 [2024-11-20 22:41:57.979837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.310 [2024-11-20 22:41:57.979862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e6300 with addr=10.0.0.2, port=8010 00:21:57.310 [2024-11-20 22:41:57.979876] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:57.310 [2024-11-20 22:41:57.979885] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:57.310 [2024-11-20 22:41:57.979893] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:58.382 [2024-11-20 22:41:58.979570] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:58.382 2024/11/20 22:41:58 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:58.382 request: 00:21:58.382 { 00:21:58.382 "method": "bdev_nvme_start_discovery", 00:21:58.382 "params": { 00:21:58.382 "name": "nvme_second", 00:21:58.382 "trtype": "tcp", 00:21:58.382 "traddr": "10.0.0.2", 00:21:58.382 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:58.382 "adrfam": "ipv4", 00:21:58.382 "trsvcid": "8010", 00:21:58.382 "attach_timeout_ms": 3000 00:21:58.382 } 00:21:58.382 } 00:21:58.382 Got JSON-RPC error response 00:21:58.382 GoRPCClient: error on JSON-RPC call 00:21:58.382 22:41:58 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:58.382 22:41:58 -- common/autotest_common.sh@653 -- # es=1 00:21:58.382 22:41:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.382 22:41:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.382 22:41:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.382 22:41:58 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:58.382 22:41:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:58.382 22:41:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:58.382 22:41:58 -- host/discovery.sh@67 -- # xargs 00:21:58.382 22:41:58 -- host/discovery.sh@67 -- # sort 00:21:58.382 22:41:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.382 22:41:58 -- common/autotest_common.sh@10 -- # set +x 00:21:58.382 22:41:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.382 22:41:59 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:58.382 22:41:59 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:58.382 22:41:59 -- host/discovery.sh@162 -- # kill 96141 00:21:58.382 22:41:59 -- host/discovery.sh@163 -- # nvmftestfini 00:21:58.382 22:41:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:58.382 22:41:59 -- nvmf/common.sh@116 -- # sync 00:21:58.650 22:41:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:58.650 22:41:59 -- nvmf/common.sh@119 -- # set +e 00:21:58.650 22:41:59 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:21:58.650 22:41:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:58.650 rmmod nvme_tcp 00:21:58.650 rmmod nvme_fabrics 00:21:58.650 rmmod nvme_keyring 00:21:58.650 22:41:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:58.650 22:41:59 -- nvmf/common.sh@123 -- # set -e 00:21:58.650 22:41:59 -- nvmf/common.sh@124 -- # return 0 00:21:58.650 22:41:59 -- nvmf/common.sh@477 -- # '[' -n 96091 ']' 00:21:58.650 22:41:59 -- nvmf/common.sh@478 -- # killprocess 96091 00:21:58.650 22:41:59 -- common/autotest_common.sh@936 -- # '[' -z 96091 ']' 00:21:58.650 22:41:59 -- common/autotest_common.sh@940 -- # kill -0 96091 00:21:58.650 22:41:59 -- common/autotest_common.sh@941 -- # uname 00:21:58.650 22:41:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.650 22:41:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96091 00:21:58.650 killing process with pid 96091 00:21:58.650 22:41:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:58.650 22:41:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:58.650 22:41:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96091' 00:21:58.650 22:41:59 -- common/autotest_common.sh@955 -- # kill 96091 00:21:58.650 22:41:59 -- common/autotest_common.sh@960 -- # wait 96091 00:21:58.909 22:41:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:58.909 22:41:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:58.909 22:41:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:58.909 22:41:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.909 22:41:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:58.909 22:41:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.909 22:41:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.909 22:41:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.909 22:41:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:58.909 ************************************ 00:21:58.909 END TEST nvmf_discovery 00:21:58.909 ************************************ 00:21:58.909 00:21:58.909 real 0m14.109s 00:21:58.909 user 0m27.349s 00:21:58.909 sys 0m1.764s 00:21:58.909 22:41:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:58.909 22:41:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.909 22:41:59 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:58.909 22:41:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:58.909 22:41:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:58.909 22:41:59 -- common/autotest_common.sh@10 -- # set +x 00:21:58.909 ************************************ 00:21:58.909 START TEST nvmf_discovery_remove_ifc 00:21:58.909 ************************************ 00:21:58.909 22:41:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:58.909 * Looking for test storage... 
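Note on the two "NOT rpc_cmd ... bdev_nvme_start_discovery" failures above: both are expected negative checks. Reusing the service name nvme_second while that discovery service already exists returns JSON-RPC error Code=-17 (File exists), and pointing it at port 8010, where nothing listens, exhausts the 3000 ms attach timeout and returns Code=-110 (Connection timed out). Below is a minimal sketch of issuing the same calls by hand against the host app's RPC socket; it reuses the exact flags seen in the trace, while the rpc.py path is an assumption (the test goes through the rpc_cmd helper instead).

  # Hypothetical manual reproduction of the negative discovery checks above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py            # assumed location
  # Duplicate service name -> expect Code=-17 (File exists)
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # No listener on 8010 -> expect Code=-110 after the 3000 ms attach timeout
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
      -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000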
00:21:58.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:58.909 22:41:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:58.909 22:41:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:58.909 22:41:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:59.169 22:41:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:59.169 22:41:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:59.169 22:41:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:59.169 22:41:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:59.169 22:41:59 -- scripts/common.sh@335 -- # IFS=.-: 00:21:59.169 22:41:59 -- scripts/common.sh@335 -- # read -ra ver1 00:21:59.169 22:41:59 -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.169 22:41:59 -- scripts/common.sh@336 -- # read -ra ver2 00:21:59.169 22:41:59 -- scripts/common.sh@337 -- # local 'op=<' 00:21:59.169 22:41:59 -- scripts/common.sh@339 -- # ver1_l=2 00:21:59.169 22:41:59 -- scripts/common.sh@340 -- # ver2_l=1 00:21:59.169 22:41:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:59.169 22:41:59 -- scripts/common.sh@343 -- # case "$op" in 00:21:59.169 22:41:59 -- scripts/common.sh@344 -- # : 1 00:21:59.169 22:41:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:59.169 22:41:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.169 22:41:59 -- scripts/common.sh@364 -- # decimal 1 00:21:59.169 22:41:59 -- scripts/common.sh@352 -- # local d=1 00:21:59.169 22:41:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.169 22:41:59 -- scripts/common.sh@354 -- # echo 1 00:21:59.169 22:41:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:59.169 22:41:59 -- scripts/common.sh@365 -- # decimal 2 00:21:59.169 22:41:59 -- scripts/common.sh@352 -- # local d=2 00:21:59.169 22:41:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.169 22:41:59 -- scripts/common.sh@354 -- # echo 2 00:21:59.169 22:41:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:59.169 22:41:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:59.169 22:41:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:59.169 22:41:59 -- scripts/common.sh@367 -- # return 0 00:21:59.169 22:41:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.169 22:41:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:59.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.169 --rc genhtml_branch_coverage=1 00:21:59.169 --rc genhtml_function_coverage=1 00:21:59.169 --rc genhtml_legend=1 00:21:59.169 --rc geninfo_all_blocks=1 00:21:59.169 --rc geninfo_unexecuted_blocks=1 00:21:59.169 00:21:59.169 ' 00:21:59.169 22:41:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:59.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.169 --rc genhtml_branch_coverage=1 00:21:59.169 --rc genhtml_function_coverage=1 00:21:59.169 --rc genhtml_legend=1 00:21:59.169 --rc geninfo_all_blocks=1 00:21:59.169 --rc geninfo_unexecuted_blocks=1 00:21:59.169 00:21:59.169 ' 00:21:59.169 22:41:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:59.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.169 --rc genhtml_branch_coverage=1 00:21:59.169 --rc genhtml_function_coverage=1 00:21:59.169 --rc genhtml_legend=1 00:21:59.169 --rc geninfo_all_blocks=1 00:21:59.169 --rc geninfo_unexecuted_blocks=1 00:21:59.169 00:21:59.169 ' 00:21:59.169 
22:41:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:59.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.169 --rc genhtml_branch_coverage=1 00:21:59.169 --rc genhtml_function_coverage=1 00:21:59.169 --rc genhtml_legend=1 00:21:59.169 --rc geninfo_all_blocks=1 00:21:59.169 --rc geninfo_unexecuted_blocks=1 00:21:59.169 00:21:59.169 ' 00:21:59.169 22:41:59 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.169 22:41:59 -- nvmf/common.sh@7 -- # uname -s 00:21:59.169 22:41:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.169 22:41:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.169 22:41:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.169 22:41:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.169 22:41:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.169 22:41:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.169 22:41:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.169 22:41:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.169 22:41:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.169 22:41:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.169 22:41:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:21:59.169 22:41:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:21:59.169 22:41:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.169 22:41:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.169 22:41:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.169 22:41:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.169 22:41:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.169 22:41:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.169 22:41:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.169 22:41:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.169 22:41:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.169 22:41:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.169 22:41:59 -- paths/export.sh@5 -- # export PATH 00:21:59.169 22:41:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.169 22:41:59 -- nvmf/common.sh@46 -- # : 0 00:21:59.169 22:41:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:59.169 22:41:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:59.169 22:41:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:59.169 22:41:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.169 22:41:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.169 22:41:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:59.169 22:41:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:59.169 22:41:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:59.169 22:41:59 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:59.169 22:41:59 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:59.169 22:41:59 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:59.169 22:41:59 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:59.169 22:41:59 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:59.169 22:41:59 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:59.169 22:41:59 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:59.169 22:41:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:59.169 22:41:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.169 22:41:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:59.169 22:41:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:59.169 22:41:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:59.169 22:41:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.169 22:41:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.169 22:41:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.169 22:41:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:59.169 22:41:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:59.169 22:41:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:59.169 22:41:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:59.169 22:41:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:59.169 22:41:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:59.169 22:41:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.169 22:41:59 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.169 22:41:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.169 22:41:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:59.169 22:41:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.169 22:41:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.169 22:41:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.169 22:41:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.169 22:41:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.169 22:41:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.169 22:41:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.169 22:41:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.169 22:41:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:59.170 22:41:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:59.170 Cannot find device "nvmf_tgt_br" 00:21:59.170 22:41:59 -- nvmf/common.sh@154 -- # true 00:21:59.170 22:41:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.170 Cannot find device "nvmf_tgt_br2" 00:21:59.170 22:41:59 -- nvmf/common.sh@155 -- # true 00:21:59.170 22:41:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:59.170 22:41:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:59.170 Cannot find device "nvmf_tgt_br" 00:21:59.170 22:41:59 -- nvmf/common.sh@157 -- # true 00:21:59.170 22:41:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:59.170 Cannot find device "nvmf_tgt_br2" 00:21:59.170 22:41:59 -- nvmf/common.sh@158 -- # true 00:21:59.170 22:41:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:59.170 22:41:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:59.170 22:41:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.170 22:41:59 -- nvmf/common.sh@161 -- # true 00:21:59.170 22:41:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.170 22:41:59 -- nvmf/common.sh@162 -- # true 00:21:59.170 22:41:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.170 22:41:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.170 22:41:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.170 22:41:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.170 22:41:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.170 22:41:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.429 22:41:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.429 22:41:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.429 22:41:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.429 22:41:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:59.429 22:41:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:59.429 22:41:59 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:59.429 22:41:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:59.429 22:41:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.429 22:41:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.429 22:41:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.429 22:41:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:59.429 22:41:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:59.429 22:41:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.429 22:41:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.429 22:42:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.429 22:42:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.429 22:42:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.429 22:42:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:59.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:59.429 00:21:59.429 --- 10.0.0.2 ping statistics --- 00:21:59.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.429 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:59.429 22:42:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:59.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:59.429 00:21:59.429 --- 10.0.0.3 ping statistics --- 00:21:59.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.429 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:59.429 22:42:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:59.429 00:21:59.429 --- 10.0.0.1 ping statistics --- 00:21:59.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.429 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:59.429 22:42:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.429 22:42:00 -- nvmf/common.sh@421 -- # return 0 00:21:59.429 22:42:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:59.429 22:42:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.429 22:42:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:59.429 22:42:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:59.429 22:42:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.429 22:42:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:59.429 22:42:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:59.429 22:42:00 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:59.429 22:42:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:59.429 22:42:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.429 22:42:00 -- common/autotest_common.sh@10 -- # set +x 00:21:59.429 22:42:00 -- nvmf/common.sh@469 -- # nvmfpid=96652 00:21:59.429 22:42:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.429 22:42:00 -- nvmf/common.sh@470 -- # waitforlisten 96652 00:21:59.429 22:42:00 -- common/autotest_common.sh@829 -- # '[' -z 96652 ']' 00:21:59.429 22:42:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.429 22:42:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.429 22:42:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.429 22:42:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.429 22:42:00 -- common/autotest_common.sh@10 -- # set +x 00:21:59.429 [2024-11-20 22:42:00.129756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:59.429 [2024-11-20 22:42:00.129856] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.688 [2024-11-20 22:42:00.267704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.688 [2024-11-20 22:42:00.332493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:59.688 [2024-11-20 22:42:00.332645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.688 [2024-11-20 22:42:00.332658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.688 [2024-11-20 22:42:00.332666] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
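The nvmf_veth_init sequence above builds the veth/bridge topology every nvmf host test in this log runs on: the target lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (with a second interface on 10.0.0.3), the initiator stays in the root namespace on 10.0.0.1, and both sides are bridged through nvmf_br. A condensed sketch of that setup, reconstructed from the traced commands (the canonical helper is in test/nvmf/common.sh and also handles cleanup and error cases):

  # Reconstructed sketch of the topology built above (second target interface
  # and several "link up" steps elided for brevity).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_br up; ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # The target is then started inside the namespace, as traced above:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The three ping checks that follow simply confirm that 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and that 10.0.0.1 is reachable from inside the namespace before any NVMe/TCP traffic is attempted.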
00:21:59.688 [2024-11-20 22:42:00.332708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.624 22:42:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.624 22:42:01 -- common/autotest_common.sh@862 -- # return 0 00:22:00.624 22:42:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.624 22:42:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.624 22:42:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.624 22:42:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.624 22:42:01 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:00.624 22:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.624 22:42:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.624 [2024-11-20 22:42:01.202954] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.624 [2024-11-20 22:42:01.211096] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:00.624 null0 00:22:00.624 [2024-11-20 22:42:01.242998] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.624 22:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.624 22:42:01 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96702 00:22:00.624 22:42:01 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:00.624 22:42:01 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96702 /tmp/host.sock 00:22:00.624 22:42:01 -- common/autotest_common.sh@829 -- # '[' -z 96702 ']' 00:22:00.624 22:42:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:00.624 22:42:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.624 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:00.624 22:42:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:00.624 22:42:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.624 22:42:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.624 [2024-11-20 22:42:01.320468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:00.624 [2024-11-20 22:42:01.320561] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96702 ] 00:22:00.883 [2024-11-20 22:42:01.462990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.883 [2024-11-20 22:42:01.540627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.883 [2024-11-20 22:42:01.540856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.820 22:42:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.820 22:42:02 -- common/autotest_common.sh@862 -- # return 0 00:22:01.820 22:42:02 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.820 22:42:02 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:01.820 22:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.820 22:42:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.820 22:42:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.820 22:42:02 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:01.820 22:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.820 22:42:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.820 22:42:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.820 22:42:02 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:01.820 22:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.820 22:42:02 -- common/autotest_common.sh@10 -- # set +x 00:22:02.757 [2024-11-20 22:42:03.448197] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:02.757 [2024-11-20 22:42:03.448227] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:02.757 [2024-11-20 22:42:03.448244] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:03.016 [2024-11-20 22:42:03.535294] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:03.016 [2024-11-20 22:42:03.590963] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:03.016 [2024-11-20 22:42:03.591012] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:03.016 [2024-11-20 22:42:03.591038] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:03.016 [2024-11-20 22:42:03.591053] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:03.016 [2024-11-20 22:42:03.591071] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:03.016 22:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.016 [2024-11-20 
22:42:03.596796] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c4e6c0 was disconnected and freed. delete nvme_qpair. 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.016 22:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.016 22:42:03 -- common/autotest_common.sh@10 -- # set +x 00:22:03.016 22:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.016 22:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.016 22:42:03 -- common/autotest_common.sh@10 -- # set +x 00:22:03.016 22:42:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.017 22:42:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.017 22:42:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.017 22:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.017 22:42:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.017 22:42:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:04.392 22:42:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.392 22:42:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.392 22:42:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.392 22:42:04 -- common/autotest_common.sh@10 -- # set +x 00:22:04.392 22:42:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.392 22:42:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.392 22:42:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.392 22:42:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.392 22:42:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:04.392 22:42:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.330 22:42:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.330 22:42:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.330 22:42:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.330 22:42:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.330 22:42:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.330 22:42:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.330 22:42:05 -- common/autotest_common.sh@10 -- # set +x 00:22:05.330 22:42:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.330 22:42:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.330 22:42:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.266 22:42:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.266 22:42:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:06.266 22:42:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.266 22:42:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.266 22:42:06 -- common/autotest_common.sh@10 -- # set +x 00:22:06.266 22:42:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.266 22:42:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.266 22:42:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.266 22:42:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:06.266 22:42:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:07.201 22:42:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.201 22:42:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.201 22:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.201 22:42:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.201 22:42:07 -- common/autotest_common.sh@10 -- # set +x 00:22:07.201 22:42:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.201 22:42:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.201 22:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.459 22:42:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.459 22:42:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.397 22:42:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.397 22:42:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.397 22:42:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.397 22:42:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.397 22:42:08 -- common/autotest_common.sh@10 -- # set +x 00:22:08.397 22:42:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.397 22:42:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.397 22:42:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.397 22:42:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:08.397 22:42:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.397 [2024-11-20 22:42:09.028866] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:08.397 [2024-11-20 22:42:09.028915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.397 [2024-11-20 22:42:09.028928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.397 [2024-11-20 22:42:09.028938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.397 [2024-11-20 22:42:09.028946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.397 [2024-11-20 22:42:09.028954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.397 [2024-11-20 22:42:09.028961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.397 [2024-11-20 22:42:09.028970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.397 [2024-11-20 22:42:09.028977] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.397 [2024-11-20 22:42:09.028986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.397 [2024-11-20 22:42:09.028994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.397 [2024-11-20 22:42:09.029002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a4b0 is same with the state(5) to be set 00:22:08.397 [2024-11-20 22:42:09.038864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2a4b0 (9): Bad file descriptor 00:22:08.397 [2024-11-20 22:42:09.048884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:09.339 22:42:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.339 22:42:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.339 22:42:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.339 22:42:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.339 22:42:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.339 22:42:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.339 22:42:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.598 [2024-11-20 22:42:10.113398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:10.535 [2024-11-20 22:42:11.137409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:10.535 [2024-11-20 22:42:11.137487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2a4b0 with addr=10.0.0.2, port=4420 00:22:10.535 [2024-11-20 22:42:11.137512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a4b0 is same with the state(5) to be set 00:22:10.535 [2024-11-20 22:42:11.137546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:10.535 [2024-11-20 22:42:11.137563] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:10.535 [2024-11-20 22:42:11.137579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:10.535 [2024-11-20 22:42:11.137595] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:10.535 [2024-11-20 22:42:11.138249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2a4b0 (9): Bad file descriptor 00:22:10.535 [2024-11-20 22:42:11.138357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
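Throughout this test the host side is observed with the same small polling loop: get_bdev_list asks the host app on /tmp/host.sock for its bdev names, and wait_for_bdev sleeps until that list matches what the current step expects (nvme0n1 while the path is up, an empty list once the target interface is pulled). Together with the --ctrlr-loss-timeout-sec 2 / --reconnect-delay-sec 1 / --fast-io-fail-timeout-sec 1 options passed at discovery start, this is what turns the interface removal above into the errno 110 reconnect attempts and, eventually, the bdev disappearing. A sketch of the polling pattern, reconstructed from the repeated traces (the real helpers live in host/discovery_remove_ifc.sh, and rpc_cmd comes from autotest_common.sh):

  # Reconstructed sketch of the polling helpers traced above.
  get_bdev_list() {
      # Space-separated, sorted bdev names as seen by the host app on /tmp/host.sock.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1    # e.g. nvme0n1, nvme1n1, or '' while waiting for removal
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }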
00:22:10.535 [2024-11-20 22:42:11.138409] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:10.535 [2024-11-20 22:42:11.138474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.535 [2024-11-20 22:42:11.138523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.535 [2024-11-20 22:42:11.138550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.535 [2024-11-20 22:42:11.138572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.535 [2024-11-20 22:42:11.138595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.535 [2024-11-20 22:42:11.138615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.535 [2024-11-20 22:42:11.138637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.535 [2024-11-20 22:42:11.138658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.535 [2024-11-20 22:42:11.138680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.535 [2024-11-20 22:42:11.138701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.535 [2024-11-20 22:42:11.138720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:10.535 [2024-11-20 22:42:11.138779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c158f0 (9): Bad file descriptor 00:22:10.535 [2024-11-20 22:42:11.139780] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:10.535 [2024-11-20 22:42:11.139833] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:10.535 22:42:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.535 22:42:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:10.535 22:42:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:11.472 22:42:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:11.472 22:42:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.472 22:42:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:11.472 22:42:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.472 22:42:12 -- common/autotest_common.sh@10 -- # set +x 00:22:11.472 22:42:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:11.472 22:42:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:11.472 22:42:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:11.730 22:42:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:11.730 22:42:12 -- common/autotest_common.sh@10 -- # set +x 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:11.730 22:42:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:11.730 22:42:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:12.667 [2024-11-20 22:42:13.149813] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:12.667 [2024-11-20 22:42:13.149832] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:12.667 [2024-11-20 22:42:13.149846] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:12.667 [2024-11-20 22:42:13.235896] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:12.667 [2024-11-20 22:42:13.290454] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:12.667 [2024-11-20 22:42:13.290491] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:12.667 [2024-11-20 22:42:13.290509] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:12.667 [2024-11-20 22:42:13.290520] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:12.667 [2024-11-20 22:42:13.290528] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:12.667 22:42:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.667 22:42:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.667 22:42:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.667 22:42:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.667 22:42:13 -- common/autotest_common.sh@10 -- # set +x 00:22:12.667 22:42:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.667 22:42:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.667 [2024-11-20 22:42:13.298318] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c59330 was disconnected and freed. delete nvme_qpair. 00:22:12.667 22:42:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.667 22:42:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:12.667 22:42:13 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:12.667 22:42:13 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96702 00:22:12.667 22:42:13 -- common/autotest_common.sh@936 -- # '[' -z 96702 ']' 00:22:12.667 22:42:13 -- common/autotest_common.sh@940 -- # kill -0 96702 00:22:12.667 22:42:13 -- common/autotest_common.sh@941 -- # uname 00:22:12.667 22:42:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:12.667 22:42:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96702 00:22:12.667 killing process with pid 96702 00:22:12.667 22:42:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:12.667 22:42:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:12.667 22:42:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96702' 00:22:12.667 22:42:13 -- common/autotest_common.sh@955 -- # kill 96702 00:22:12.667 22:42:13 -- common/autotest_common.sh@960 -- # wait 96702 00:22:12.927 22:42:13 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:12.927 22:42:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:12.927 22:42:13 -- nvmf/common.sh@116 -- # sync 00:22:13.186 22:42:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:13.186 22:42:13 -- nvmf/common.sh@119 -- # set +e 00:22:13.186 22:42:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:13.186 22:42:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:13.186 rmmod nvme_tcp 00:22:13.186 rmmod nvme_fabrics 00:22:13.186 rmmod nvme_keyring 00:22:13.186 22:42:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:13.186 22:42:13 -- nvmf/common.sh@123 -- # set -e 00:22:13.186 22:42:13 -- nvmf/common.sh@124 -- # return 0 00:22:13.186 22:42:13 -- nvmf/common.sh@477 -- # '[' -n 96652 ']' 00:22:13.186 22:42:13 -- nvmf/common.sh@478 -- # killprocess 96652 00:22:13.186 22:42:13 -- common/autotest_common.sh@936 -- # '[' -z 96652 ']' 00:22:13.186 22:42:13 -- common/autotest_common.sh@940 -- # kill -0 96652 00:22:13.186 22:42:13 -- common/autotest_common.sh@941 -- # uname 00:22:13.186 22:42:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:13.186 22:42:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96652 00:22:13.186 killing process with pid 96652 00:22:13.186 22:42:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:13.186 22:42:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
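Both shutdown paths in this log (pids 96141/96091 earlier, 96702/96652 here) go through the same killprocess helper, whose traced steps are visible above: confirm the pid is still alive, check whether it is a sudo wrapper (which would need different handling), then kill and wait so the reactor's sockets and shared memory are released before the next test starts. A simplified sketch of that flow, reconstructed from the trace (the canonical helper in autotest_common.sh does more argument checking and handles the sudo case properly):

  # Reconstructed sketch of the killprocess flow traced above.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                        # nothing to do if it is already gone
      if [[ $(uname) == Linux ]]; then
          local pname
          pname=$(ps --no-headers -o comm= "$pid")      # reactor_0 / reactor_1 in this log
          if [[ $pname == sudo ]]; then
              echo "pid $pid is a sudo wrapper; handling omitted in this sketch" >&2
              return 1
          fi
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap it so sockets and hugepages are freed
  }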
00:22:13.186 22:42:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96652' 00:22:13.186 22:42:13 -- common/autotest_common.sh@955 -- # kill 96652 00:22:13.186 22:42:13 -- common/autotest_common.sh@960 -- # wait 96652 00:22:13.446 22:42:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:13.446 22:42:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:13.446 22:42:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:13.446 22:42:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.446 22:42:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:13.446 22:42:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.446 22:42:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.446 22:42:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.446 22:42:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:13.446 00:22:13.446 real 0m14.518s 00:22:13.446 user 0m24.869s 00:22:13.446 sys 0m1.622s 00:22:13.446 22:42:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:13.446 22:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:13.446 ************************************ 00:22:13.446 END TEST nvmf_discovery_remove_ifc 00:22:13.446 ************************************ 00:22:13.446 22:42:14 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:13.446 22:42:14 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:13.446 22:42:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:13.446 22:42:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:13.446 22:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:13.446 ************************************ 00:22:13.446 START TEST nvmf_digest 00:22:13.446 ************************************ 00:22:13.446 22:42:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:13.446 * Looking for test storage... 00:22:13.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:13.446 22:42:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:13.446 22:42:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:13.446 22:42:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:13.706 22:42:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:13.706 22:42:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:13.706 22:42:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:13.706 22:42:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:13.706 22:42:14 -- scripts/common.sh@335 -- # IFS=.-: 00:22:13.706 22:42:14 -- scripts/common.sh@335 -- # read -ra ver1 00:22:13.706 22:42:14 -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.706 22:42:14 -- scripts/common.sh@336 -- # read -ra ver2 00:22:13.706 22:42:14 -- scripts/common.sh@337 -- # local 'op=<' 00:22:13.706 22:42:14 -- scripts/common.sh@339 -- # ver1_l=2 00:22:13.706 22:42:14 -- scripts/common.sh@340 -- # ver2_l=1 00:22:13.706 22:42:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:13.706 22:42:14 -- scripts/common.sh@343 -- # case "$op" in 00:22:13.706 22:42:14 -- scripts/common.sh@344 -- # : 1 00:22:13.706 22:42:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:13.706 22:42:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.706 22:42:14 -- scripts/common.sh@364 -- # decimal 1 00:22:13.706 22:42:14 -- scripts/common.sh@352 -- # local d=1 00:22:13.706 22:42:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.707 22:42:14 -- scripts/common.sh@354 -- # echo 1 00:22:13.707 22:42:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:13.707 22:42:14 -- scripts/common.sh@365 -- # decimal 2 00:22:13.707 22:42:14 -- scripts/common.sh@352 -- # local d=2 00:22:13.707 22:42:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.707 22:42:14 -- scripts/common.sh@354 -- # echo 2 00:22:13.707 22:42:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:13.707 22:42:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:13.707 22:42:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:13.707 22:42:14 -- scripts/common.sh@367 -- # return 0 00:22:13.707 22:42:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.707 22:42:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:13.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.707 --rc genhtml_branch_coverage=1 00:22:13.707 --rc genhtml_function_coverage=1 00:22:13.707 --rc genhtml_legend=1 00:22:13.707 --rc geninfo_all_blocks=1 00:22:13.707 --rc geninfo_unexecuted_blocks=1 00:22:13.707 00:22:13.707 ' 00:22:13.707 22:42:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:13.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.707 --rc genhtml_branch_coverage=1 00:22:13.707 --rc genhtml_function_coverage=1 00:22:13.707 --rc genhtml_legend=1 00:22:13.707 --rc geninfo_all_blocks=1 00:22:13.707 --rc geninfo_unexecuted_blocks=1 00:22:13.707 00:22:13.707 ' 00:22:13.707 22:42:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:13.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.707 --rc genhtml_branch_coverage=1 00:22:13.707 --rc genhtml_function_coverage=1 00:22:13.707 --rc genhtml_legend=1 00:22:13.707 --rc geninfo_all_blocks=1 00:22:13.707 --rc geninfo_unexecuted_blocks=1 00:22:13.707 00:22:13.707 ' 00:22:13.707 22:42:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:13.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.707 --rc genhtml_branch_coverage=1 00:22:13.707 --rc genhtml_function_coverage=1 00:22:13.707 --rc genhtml_legend=1 00:22:13.707 --rc geninfo_all_blocks=1 00:22:13.707 --rc geninfo_unexecuted_blocks=1 00:22:13.707 00:22:13.707 ' 00:22:13.707 22:42:14 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:13.707 22:42:14 -- nvmf/common.sh@7 -- # uname -s 00:22:13.707 22:42:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.707 22:42:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.707 22:42:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.707 22:42:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.707 22:42:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.707 22:42:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.707 22:42:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.707 22:42:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.707 22:42:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.707 22:42:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.707 22:42:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:22:13.707 
22:42:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:22:13.707 22:42:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.707 22:42:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.707 22:42:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:13.707 22:42:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:13.707 22:42:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.707 22:42:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.707 22:42:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.707 22:42:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.707 22:42:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.707 22:42:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.707 22:42:14 -- paths/export.sh@5 -- # export PATH 00:22:13.707 22:42:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.707 22:42:14 -- nvmf/common.sh@46 -- # : 0 00:22:13.707 22:42:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:13.707 22:42:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:13.707 22:42:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:13.707 22:42:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.707 22:42:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.707 22:42:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
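An aside on the cmp_versions trace above (scripts/common.sh): the lcov check splits each version string on '.', '-' and ':' and compares the numeric fields one by one. A minimal re-statement of that logic for orientation only; the function name, the missing-field default and the final usage line are this sketch's own, not the exact helpers:

  ver_lt() {                                   # "is $1 older than $2?", e.g. ver_lt 1.15 2
      local IFS='.-:' v a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0    # numeric fields only, as in this trace
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1                                 # equal versions are not "less than"
  }
  # lcov 1.15 is older than 2, so the legacy --rc lcov_* flags are chosen, as at
  # autotest_common.sh@1691 above.
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 \
      && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'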
00:22:13.707 22:42:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:13.707 22:42:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:13.707 22:42:14 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:13.707 22:42:14 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:13.707 22:42:14 -- host/digest.sh@16 -- # runtime=2 00:22:13.707 22:42:14 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:13.707 22:42:14 -- host/digest.sh@132 -- # nvmftestinit 00:22:13.707 22:42:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:13.707 22:42:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.707 22:42:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:13.707 22:42:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:13.707 22:42:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:13.707 22:42:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.707 22:42:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.707 22:42:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.707 22:42:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:13.707 22:42:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:13.707 22:42:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:13.707 22:42:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:13.707 22:42:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:13.707 22:42:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:13.707 22:42:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.707 22:42:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.707 22:42:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:13.707 22:42:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:13.707 22:42:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:13.707 22:42:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:13.707 22:42:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:13.707 22:42:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.707 22:42:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:13.707 22:42:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:13.707 22:42:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:13.707 22:42:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:13.707 22:42:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:13.707 22:42:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:13.707 Cannot find device "nvmf_tgt_br" 00:22:13.707 22:42:14 -- nvmf/common.sh@154 -- # true 00:22:13.707 22:42:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:13.707 Cannot find device "nvmf_tgt_br2" 00:22:13.707 22:42:14 -- nvmf/common.sh@155 -- # true 00:22:13.707 22:42:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:13.707 22:42:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:13.707 Cannot find device "nvmf_tgt_br" 00:22:13.707 22:42:14 -- nvmf/common.sh@157 -- # true 00:22:13.707 22:42:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:13.707 Cannot find device "nvmf_tgt_br2" 00:22:13.707 22:42:14 -- nvmf/common.sh@158 -- # true 00:22:13.707 22:42:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:13.707 22:42:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:13.707 
22:42:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:13.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.707 22:42:14 -- nvmf/common.sh@161 -- # true 00:22:13.707 22:42:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:13.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.707 22:42:14 -- nvmf/common.sh@162 -- # true 00:22:13.707 22:42:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:13.707 22:42:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:13.707 22:42:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:13.707 22:42:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:13.967 22:42:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:13.967 22:42:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:13.967 22:42:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:13.967 22:42:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:13.967 22:42:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:13.967 22:42:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:13.967 22:42:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:13.967 22:42:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:13.967 22:42:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:13.967 22:42:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:13.967 22:42:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:13.967 22:42:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:13.967 22:42:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:13.967 22:42:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:13.967 22:42:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:13.967 22:42:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:13.967 22:42:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:13.967 22:42:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:13.967 22:42:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:13.967 22:42:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:13.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:22:13.967 00:22:13.967 --- 10.0.0.2 ping statistics --- 00:22:13.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.967 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:13.967 22:42:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:13.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:13.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:22:13.967 00:22:13.967 --- 10.0.0.3 ping statistics --- 00:22:13.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.967 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:13.967 22:42:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:13.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:13.967 00:22:13.967 --- 10.0.0.1 ping statistics --- 00:22:13.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.967 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:13.967 22:42:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.967 22:42:14 -- nvmf/common.sh@421 -- # return 0 00:22:13.967 22:42:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:13.967 22:42:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.967 22:42:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:13.967 22:42:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:13.967 22:42:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.967 22:42:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:13.967 22:42:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:13.967 22:42:14 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:13.967 22:42:14 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:13.967 22:42:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:13.967 22:42:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:13.967 22:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:13.967 ************************************ 00:22:13.967 START TEST nvmf_digest_clean 00:22:13.967 ************************************ 00:22:13.967 22:42:14 -- common/autotest_common.sh@1114 -- # run_digest 00:22:13.967 22:42:14 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:13.967 22:42:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:13.967 22:42:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:13.967 22:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:13.967 22:42:14 -- nvmf/common.sh@469 -- # nvmfpid=97124 00:22:13.967 22:42:14 -- nvmf/common.sh@470 -- # waitforlisten 97124 00:22:13.967 22:42:14 -- common/autotest_common.sh@829 -- # '[' -z 97124 ']' 00:22:13.967 22:42:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:13.967 22:42:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.967 22:42:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.967 22:42:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.967 22:42:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.967 22:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:13.967 [2024-11-20 22:42:14.695002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
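For orientation, the nvmf_veth_init sequence just traced builds a small bridged topology: the initiator-side veth (nvmf_init_if, 10.0.0.1) stays in the root namespace, the target-side veths (nvmf_tgt_if 10.0.0.2, nvmf_tgt_if2 10.0.0.3) move into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge. A condensed sketch with the same device names and addresses as the trace; the second target interface and the stale-device cleanup are left out:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + its bridge port
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end + its bridge port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # root namespace -> target, the sanity check shown above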
00:22:13.967 [2024-11-20 22:42:14.695069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.227 [2024-11-20 22:42:14.824704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.227 [2024-11-20 22:42:14.899979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:14.227 [2024-11-20 22:42:14.900132] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.227 [2024-11-20 22:42:14.900144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.227 [2024-11-20 22:42:14.900152] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.227 [2024-11-20 22:42:14.900177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.164 22:42:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.164 22:42:15 -- common/autotest_common.sh@862 -- # return 0 00:22:15.164 22:42:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:15.164 22:42:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.164 22:42:15 -- common/autotest_common.sh@10 -- # set +x 00:22:15.164 22:42:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.164 22:42:15 -- host/digest.sh@120 -- # common_target_config 00:22:15.164 22:42:15 -- host/digest.sh@43 -- # rpc_cmd 00:22:15.164 22:42:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.164 22:42:15 -- common/autotest_common.sh@10 -- # set +x 00:22:15.164 null0 00:22:15.164 [2024-11-20 22:42:15.813495] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.164 [2024-11-20 22:42:15.837667] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.164 22:42:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.164 22:42:15 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:15.164 22:42:15 -- host/digest.sh@77 -- # local rw bs qd 00:22:15.164 22:42:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:15.164 22:42:15 -- host/digest.sh@80 -- # rw=randread 00:22:15.164 22:42:15 -- host/digest.sh@80 -- # bs=4096 00:22:15.164 22:42:15 -- host/digest.sh@80 -- # qd=128 00:22:15.164 22:42:15 -- host/digest.sh@82 -- # bperfpid=97174 00:22:15.164 22:42:15 -- host/digest.sh@83 -- # waitforlisten 97174 /var/tmp/bperf.sock 00:22:15.164 22:42:15 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:15.164 22:42:15 -- common/autotest_common.sh@829 -- # '[' -z 97174 ']' 00:22:15.164 22:42:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:15.164 22:42:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.164 22:42:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:15.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:15.164 22:42:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.164 22:42:15 -- common/autotest_common.sh@10 -- # set +x 00:22:15.423 [2024-11-20 22:42:15.897068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:15.423 [2024-11-20 22:42:15.897346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97174 ] 00:22:15.423 [2024-11-20 22:42:16.037811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.423 [2024-11-20 22:42:16.100970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.423 22:42:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.423 22:42:16 -- common/autotest_common.sh@862 -- # return 0 00:22:15.423 22:42:16 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:15.423 22:42:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:15.423 22:42:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:15.992 22:42:16 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:15.992 22:42:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:15.992 nvme0n1 00:22:15.992 22:42:16 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:15.992 22:42:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:16.252 Running I/O for 2 seconds... 
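Each run_bperf invocation in this test follows the same cycle, visible in the trace above: start bdevperf paused with --wait-for-rpc, initialize its framework over the per-run RPC socket, attach the remote controller with data digest enabled (--ddgst), drive the workload with perform_tests, then tear the process down. A minimal sketch of one cycle using the same paths and arguments as this first randread run; the socket-wait loop is a stand-in for waitforlisten and PID/error handling is omitted:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  while [[ ! -S /var/tmp/bperf.sock ]]; do sleep 0.1; done        # stand-in for waitforlisten
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  kill $bperfpid && wait $bperfpid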
00:22:18.157 00:22:18.157 Latency(us) 00:22:18.157 [2024-11-20T22:42:18.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.157 [2024-11-20T22:42:18.891Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:18.157 nvme0n1 : 2.00 24042.99 93.92 0.00 0.00 5318.46 2308.65 17635.14 00:22:18.157 [2024-11-20T22:42:18.891Z] =================================================================================================================== 00:22:18.157 [2024-11-20T22:42:18.891Z] Total : 24042.99 93.92 0.00 0.00 5318.46 2308.65 17635.14 00:22:18.157 0 00:22:18.157 22:42:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:18.157 22:42:18 -- host/digest.sh@92 -- # get_accel_stats 00:22:18.157 22:42:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:18.157 22:42:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:18.157 22:42:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:18.157 | select(.opcode=="crc32c") 00:22:18.157 | "\(.module_name) \(.executed)"' 00:22:18.416 22:42:19 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:18.416 22:42:19 -- host/digest.sh@93 -- # exp_module=software 00:22:18.416 22:42:19 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:18.416 22:42:19 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:18.416 22:42:19 -- host/digest.sh@97 -- # killprocess 97174 00:22:18.416 22:42:19 -- common/autotest_common.sh@936 -- # '[' -z 97174 ']' 00:22:18.416 22:42:19 -- common/autotest_common.sh@940 -- # kill -0 97174 00:22:18.416 22:42:19 -- common/autotest_common.sh@941 -- # uname 00:22:18.416 22:42:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:18.416 22:42:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97174 00:22:18.416 22:42:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:18.416 22:42:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:18.416 killing process with pid 97174 00:22:18.416 22:42:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97174' 00:22:18.416 Received shutdown signal, test time was about 2.000000 seconds 00:22:18.416 00:22:18.416 Latency(us) 00:22:18.416 [2024-11-20T22:42:19.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.416 [2024-11-20T22:42:19.150Z] =================================================================================================================== 00:22:18.416 [2024-11-20T22:42:19.150Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.416 22:42:19 -- common/autotest_common.sh@955 -- # kill 97174 00:22:18.416 22:42:19 -- common/autotest_common.sh@960 -- # wait 97174 00:22:18.686 22:42:19 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:18.686 22:42:19 -- host/digest.sh@77 -- # local rw bs qd 00:22:18.686 22:42:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:18.686 22:42:19 -- host/digest.sh@80 -- # rw=randread 00:22:18.686 22:42:19 -- host/digest.sh@80 -- # bs=131072 00:22:18.686 22:42:19 -- host/digest.sh@80 -- # qd=16 00:22:18.686 22:42:19 -- host/digest.sh@82 -- # bperfpid=97244 00:22:18.686 22:42:19 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:18.686 22:42:19 -- host/digest.sh@83 -- # waitforlisten 97244 /var/tmp/bperf.sock 00:22:18.686 22:42:19 -- 
common/autotest_common.sh@829 -- # '[' -z 97244 ']' 00:22:18.686 22:42:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:18.686 22:42:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:18.686 22:42:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:18.686 22:42:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.686 22:42:19 -- common/autotest_common.sh@10 -- # set +x 00:22:18.686 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:18.686 Zero copy mechanism will not be used. 00:22:18.686 [2024-11-20 22:42:19.384962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:18.686 [2024-11-20 22:42:19.385050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97244 ] 00:22:18.944 [2024-11-20 22:42:19.517959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.944 [2024-11-20 22:42:19.578052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.880 22:42:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.880 22:42:20 -- common/autotest_common.sh@862 -- # return 0 00:22:19.880 22:42:20 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:19.880 22:42:20 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:19.880 22:42:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:20.139 22:42:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:20.139 22:42:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:20.397 nvme0n1 00:22:20.397 22:42:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:20.397 22:42:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:20.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:20.397 Zero copy mechanism will not be used. 00:22:20.397 Running I/O for 2 seconds... 
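After each run the test reads the accel-framework statistics back from bdevperf and checks that the crc32c opcode was actually executed, and by the expected module ("software" here, since no hardware accel module is loaded). A sketch of that check, matching the accel_get_stats/jq pipeline in the trace; the surrounding read/compare shell is illustrative:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        (( acc_executed > 0 ))        || echo "FAIL: no crc32c operations executed"
        [[ $acc_module == software ]] || echo "FAIL: unexpected accel module $acc_module"; }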
00:22:22.300 00:22:22.300 Latency(us) 00:22:22.300 [2024-11-20T22:42:23.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.300 [2024-11-20T22:42:23.034Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:22.300 nvme0n1 : 2.00 9111.31 1138.91 0.00 0.00 1753.31 707.49 10604.92 00:22:22.300 [2024-11-20T22:42:23.034Z] =================================================================================================================== 00:22:22.300 [2024-11-20T22:42:23.034Z] Total : 9111.31 1138.91 0.00 0.00 1753.31 707.49 10604.92 00:22:22.300 0 00:22:22.300 22:42:23 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:22.300 22:42:23 -- host/digest.sh@92 -- # get_accel_stats 00:22:22.300 22:42:23 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:22.300 22:42:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:22.300 22:42:23 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:22.300 | select(.opcode=="crc32c") 00:22:22.300 | "\(.module_name) \(.executed)"' 00:22:22.559 22:42:23 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:22.559 22:42:23 -- host/digest.sh@93 -- # exp_module=software 00:22:22.559 22:42:23 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:22.559 22:42:23 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:22.559 22:42:23 -- host/digest.sh@97 -- # killprocess 97244 00:22:22.559 22:42:23 -- common/autotest_common.sh@936 -- # '[' -z 97244 ']' 00:22:22.818 22:42:23 -- common/autotest_common.sh@940 -- # kill -0 97244 00:22:22.818 22:42:23 -- common/autotest_common.sh@941 -- # uname 00:22:22.818 22:42:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.818 22:42:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97244 00:22:22.818 22:42:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:22.818 22:42:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:22.818 killing process with pid 97244 00:22:22.818 22:42:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97244' 00:22:22.818 Received shutdown signal, test time was about 2.000000 seconds 00:22:22.818 00:22:22.818 Latency(us) 00:22:22.818 [2024-11-20T22:42:23.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.818 [2024-11-20T22:42:23.552Z] =================================================================================================================== 00:22:22.818 [2024-11-20T22:42:23.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.818 22:42:23 -- common/autotest_common.sh@955 -- # kill 97244 00:22:22.818 22:42:23 -- common/autotest_common.sh@960 -- # wait 97244 00:22:23.076 22:42:23 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:23.076 22:42:23 -- host/digest.sh@77 -- # local rw bs qd 00:22:23.076 22:42:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:23.076 22:42:23 -- host/digest.sh@80 -- # rw=randwrite 00:22:23.076 22:42:23 -- host/digest.sh@80 -- # bs=4096 00:22:23.076 22:42:23 -- host/digest.sh@80 -- # qd=128 00:22:23.076 22:42:23 -- host/digest.sh@82 -- # bperfpid=97336 00:22:23.076 22:42:23 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:23.076 22:42:23 -- host/digest.sh@83 -- # waitforlisten 97336 /var/tmp/bperf.sock 00:22:23.076 22:42:23 -- 
common/autotest_common.sh@829 -- # '[' -z 97336 ']' 00:22:23.076 22:42:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:23.076 22:42:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:23.076 22:42:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:23.076 22:42:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.076 22:42:23 -- common/autotest_common.sh@10 -- # set +x 00:22:23.076 [2024-11-20 22:42:23.599138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:23.076 [2024-11-20 22:42:23.599230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97336 ] 00:22:23.076 [2024-11-20 22:42:23.730801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.076 [2024-11-20 22:42:23.790120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.335 22:42:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.335 22:42:23 -- common/autotest_common.sh@862 -- # return 0 00:22:23.335 22:42:23 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:23.335 22:42:23 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:23.335 22:42:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:23.594 22:42:24 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:23.594 22:42:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:23.853 nvme0n1 00:22:23.853 22:42:24 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:23.853 22:42:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:24.111 Running I/O for 2 seconds... 
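The bperf_rpc and bperf_py helpers expanded at host/digest.sh@18 and @19 throughout this log are thin wrappers around the regular RPC clients, pointed at the per-run bdevperf socket rather than the target's /var/tmp/spdk.sock. Roughly, as a sketch rather than the verbatim definitions:

  bperfsock=/var/tmp/bperf.sock
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" "$@"; }
  bperf_py()  { /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperfsock" "$@"; }
  bperf_rpc framework_start_init      # as at host/digest.sh@86/@18 above
  bperf_py perform_tests              # as at host/digest.sh@91/@19 above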
00:22:26.014 00:22:26.014 Latency(us) 00:22:26.014 [2024-11-20T22:42:26.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.015 [2024-11-20T22:42:26.749Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:26.015 nvme0n1 : 2.00 28211.67 110.20 0.00 0.00 4532.95 1869.27 15490.33 00:22:26.015 [2024-11-20T22:42:26.749Z] =================================================================================================================== 00:22:26.015 [2024-11-20T22:42:26.749Z] Total : 28211.67 110.20 0.00 0.00 4532.95 1869.27 15490.33 00:22:26.015 0 00:22:26.015 22:42:26 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:26.015 22:42:26 -- host/digest.sh@92 -- # get_accel_stats 00:22:26.015 22:42:26 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:26.015 | select(.opcode=="crc32c") 00:22:26.015 | "\(.module_name) \(.executed)"' 00:22:26.015 22:42:26 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:26.015 22:42:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:26.273 22:42:26 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:26.273 22:42:26 -- host/digest.sh@93 -- # exp_module=software 00:22:26.273 22:42:26 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:26.273 22:42:26 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:26.273 22:42:26 -- host/digest.sh@97 -- # killprocess 97336 00:22:26.273 22:42:26 -- common/autotest_common.sh@936 -- # '[' -z 97336 ']' 00:22:26.273 22:42:26 -- common/autotest_common.sh@940 -- # kill -0 97336 00:22:26.273 22:42:26 -- common/autotest_common.sh@941 -- # uname 00:22:26.273 22:42:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:26.273 22:42:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97336 00:22:26.532 22:42:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:26.532 22:42:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:26.532 killing process with pid 97336 00:22:26.532 22:42:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97336' 00:22:26.532 Received shutdown signal, test time was about 2.000000 seconds 00:22:26.532 00:22:26.532 Latency(us) 00:22:26.532 [2024-11-20T22:42:27.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.532 [2024-11-20T22:42:27.266Z] =================================================================================================================== 00:22:26.532 [2024-11-20T22:42:27.266Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.532 22:42:27 -- common/autotest_common.sh@955 -- # kill 97336 00:22:26.532 22:42:27 -- common/autotest_common.sh@960 -- # wait 97336 00:22:26.532 22:42:27 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:26.532 22:42:27 -- host/digest.sh@77 -- # local rw bs qd 00:22:26.532 22:42:27 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:26.532 22:42:27 -- host/digest.sh@80 -- # rw=randwrite 00:22:26.532 22:42:27 -- host/digest.sh@80 -- # bs=131072 00:22:26.532 22:42:27 -- host/digest.sh@80 -- # qd=16 00:22:26.532 22:42:27 -- host/digest.sh@82 -- # bperfpid=97408 00:22:26.532 22:42:27 -- host/digest.sh@83 -- # waitforlisten 97408 /var/tmp/bperf.sock 00:22:26.532 22:42:27 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:26.532 22:42:27 -- 
common/autotest_common.sh@829 -- # '[' -z 97408 ']' 00:22:26.532 22:42:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:26.532 22:42:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:26.532 22:42:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:26.532 22:42:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.532 22:42:27 -- common/autotest_common.sh@10 -- # set +x 00:22:26.791 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:26.791 Zero copy mechanism will not be used. 00:22:26.791 [2024-11-20 22:42:27.312108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:26.791 [2024-11-20 22:42:27.312219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97408 ] 00:22:26.791 [2024-11-20 22:42:27.450523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.791 [2024-11-20 22:42:27.510259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.728 22:42:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.728 22:42:28 -- common/autotest_common.sh@862 -- # return 0 00:22:27.728 22:42:28 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:27.728 22:42:28 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:27.728 22:42:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:27.987 22:42:28 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:27.987 22:42:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:28.246 nvme0n1 00:22:28.246 22:42:28 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:28.246 22:42:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:28.506 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:28.506 Zero copy mechanism will not be used. 00:22:28.506 Running I/O for 2 seconds... 
00:22:30.409 00:22:30.409 Latency(us) 00:22:30.409 [2024-11-20T22:42:31.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.409 [2024-11-20T22:42:31.143Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:30.409 nvme0n1 : 2.00 7655.09 956.89 0.00 0.00 2085.98 1735.21 10128.29 00:22:30.409 [2024-11-20T22:42:31.143Z] =================================================================================================================== 00:22:30.409 [2024-11-20T22:42:31.143Z] Total : 7655.09 956.89 0.00 0.00 2085.98 1735.21 10128.29 00:22:30.409 0 00:22:30.409 22:42:31 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:30.409 22:42:31 -- host/digest.sh@92 -- # get_accel_stats 00:22:30.409 22:42:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:30.409 22:42:31 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:30.409 | select(.opcode=="crc32c") 00:22:30.409 | "\(.module_name) \(.executed)"' 00:22:30.409 22:42:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:30.668 22:42:31 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:30.668 22:42:31 -- host/digest.sh@93 -- # exp_module=software 00:22:30.668 22:42:31 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:30.668 22:42:31 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:30.668 22:42:31 -- host/digest.sh@97 -- # killprocess 97408 00:22:30.668 22:42:31 -- common/autotest_common.sh@936 -- # '[' -z 97408 ']' 00:22:30.668 22:42:31 -- common/autotest_common.sh@940 -- # kill -0 97408 00:22:30.668 22:42:31 -- common/autotest_common.sh@941 -- # uname 00:22:30.668 22:42:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.668 22:42:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97408 00:22:30.668 22:42:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:30.668 22:42:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:30.668 killing process with pid 97408 00:22:30.668 22:42:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97408' 00:22:30.668 Received shutdown signal, test time was about 2.000000 seconds 00:22:30.668 00:22:30.668 Latency(us) 00:22:30.668 [2024-11-20T22:42:31.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.668 [2024-11-20T22:42:31.402Z] =================================================================================================================== 00:22:30.668 [2024-11-20T22:42:31.402Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.668 22:42:31 -- common/autotest_common.sh@955 -- # kill 97408 00:22:30.668 22:42:31 -- common/autotest_common.sh@960 -- # wait 97408 00:22:30.927 22:42:31 -- host/digest.sh@126 -- # killprocess 97124 00:22:30.927 22:42:31 -- common/autotest_common.sh@936 -- # '[' -z 97124 ']' 00:22:30.927 22:42:31 -- common/autotest_common.sh@940 -- # kill -0 97124 00:22:30.927 22:42:31 -- common/autotest_common.sh@941 -- # uname 00:22:30.927 22:42:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.927 22:42:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97124 00:22:30.927 22:42:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:30.927 22:42:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:30.927 killing process with pid 97124 00:22:30.927 22:42:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97124' 
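Pulling the four clean-digest runs above together: same target, same --ddgst controller, four workload points. The calls below are the ones traced above, annotated with what each latency table reported:

  run_bperf randread  4096   128    # ~24043 IOPS,   93.92 MiB/s, avg 5318 us
  run_bperf randread  131072 16     # ~9111  IOPS, 1138.91 MiB/s, avg 1753 us
  run_bperf randwrite 4096   128    # ~28212 IOPS,  110.20 MiB/s, avg 4533 us
  run_bperf randwrite 131072 16     # ~7655  IOPS,  956.89 MiB/s, avg 2086 us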
00:22:30.927 22:42:31 -- common/autotest_common.sh@955 -- # kill 97124 00:22:30.927 22:42:31 -- common/autotest_common.sh@960 -- # wait 97124 00:22:31.186 00:22:31.186 real 0m17.186s 00:22:31.186 user 0m30.933s 00:22:31.186 sys 0m5.422s 00:22:31.186 22:42:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:31.186 22:42:31 -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 ************************************ 00:22:31.186 END TEST nvmf_digest_clean 00:22:31.186 ************************************ 00:22:31.186 22:42:31 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:31.186 22:42:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:31.186 22:42:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:31.186 22:42:31 -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 ************************************ 00:22:31.186 START TEST nvmf_digest_error 00:22:31.186 ************************************ 00:22:31.186 22:42:31 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:31.186 22:42:31 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:31.186 22:42:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:31.186 22:42:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.186 22:42:31 -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 22:42:31 -- nvmf/common.sh@469 -- # nvmfpid=97527 00:22:31.186 22:42:31 -- nvmf/common.sh@470 -- # waitforlisten 97527 00:22:31.186 22:42:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:31.186 22:42:31 -- common/autotest_common.sh@829 -- # '[' -z 97527 ']' 00:22:31.186 22:42:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.186 22:42:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.186 22:42:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.186 22:42:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.186 22:42:31 -- common/autotest_common.sh@10 -- # set +x 00:22:31.445 [2024-11-20 22:42:31.947532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:31.445 [2024-11-20 22:42:31.947622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.445 [2024-11-20 22:42:32.085750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.445 [2024-11-20 22:42:32.144940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:31.445 [2024-11-20 22:42:32.145104] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.445 [2024-11-20 22:42:32.145116] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.445 [2024-11-20 22:42:32.145125] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.445 [2024-11-20 22:42:32.145150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.704 22:42:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.704 22:42:32 -- common/autotest_common.sh@862 -- # return 0 00:22:31.704 22:42:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:31.704 22:42:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:31.704 22:42:32 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 22:42:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.704 22:42:32 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:31.704 22:42:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.704 22:42:32 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 [2024-11-20 22:42:32.229609] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:31.704 22:42:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.704 22:42:32 -- host/digest.sh@104 -- # common_target_config 00:22:31.704 22:42:32 -- host/digest.sh@43 -- # rpc_cmd 00:22:31.704 22:42:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.704 22:42:32 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 null0 00:22:31.704 [2024-11-20 22:42:32.335338] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.704 [2024-11-20 22:42:32.359449] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.704 22:42:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.704 22:42:32 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:31.704 22:42:32 -- host/digest.sh@54 -- # local rw bs qd 00:22:31.704 22:42:32 -- host/digest.sh@56 -- # rw=randread 00:22:31.704 22:42:32 -- host/digest.sh@56 -- # bs=4096 00:22:31.704 22:42:32 -- host/digest.sh@56 -- # qd=128 00:22:31.704 22:42:32 -- host/digest.sh@58 -- # bperfpid=97552 00:22:31.704 22:42:32 -- host/digest.sh@60 -- # waitforlisten 97552 /var/tmp/bperf.sock 00:22:31.704 22:42:32 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:31.704 22:42:32 -- common/autotest_common.sh@829 -- # '[' -z 97552 ']' 00:22:31.704 22:42:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:31.704 22:42:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:31.704 22:42:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:31.704 22:42:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.704 22:42:32 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 [2024-11-20 22:42:32.423130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:31.704 [2024-11-20 22:42:32.423234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97552 ] 00:22:31.963 [2024-11-20 22:42:32.562076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.963 [2024-11-20 22:42:32.643012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.900 22:42:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.900 22:42:33 -- common/autotest_common.sh@862 -- # return 0 00:22:32.900 22:42:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:32.900 22:42:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:33.160 22:42:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:33.160 22:42:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.160 22:42:33 -- common/autotest_common.sh@10 -- # set +x 00:22:33.160 22:42:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.160 22:42:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.160 22:42:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.419 nvme0n1 00:22:33.419 22:42:33 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:33.419 22:42:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.419 22:42:33 -- common/autotest_common.sh@10 -- # set +x 00:22:33.419 22:42:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.419 22:42:33 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:33.419 22:42:33 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.419 Running I/O for 2 seconds... 
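The error phase differs from the clean runs in a few RPCs, all visible above: at target startup crc32c is assigned to the error-injection accel module (accel_assign_opc), the initiator enables --nvme-error-stat and unlimited bdev retries, and error injection is kept disabled while the controller attaches, then switched to "corrupt" with the -i 256 argument as traced. A sketch of that sequence; in this trace rpc_cmd talks to the target app on /var/tmp/spdk.sock and bperf_rpc to the bdevperf socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o crc32c -m error                      # target: route crc32c via the "error" module
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1                   # initiator: retry failed commands indefinitely
  $rpc accel_error_inject_error -o crc32c -t disable            # no injection while connecting
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256     # inject corruption, parameters as traced above
  # The randread run below then logs "data digest error" on the host and those commands
  # complete with COMMAND TRANSIENT TRANSPORT ERROR, which the retry policy absorbs.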
00:22:33.419 [2024-11-20 22:42:34.044787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.044839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.044858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.419 [2024-11-20 22:42:34.056808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.056846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.056869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.419 [2024-11-20 22:42:34.068610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.068648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.068661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.419 [2024-11-20 22:42:34.080755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.080792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.080815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.419 [2024-11-20 22:42:34.092994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.093030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.093053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.419 [2024-11-20 22:42:34.104756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.104793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.104816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.419 [2024-11-20 22:42:34.116631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.116672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.116685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.419 [2024-11-20 22:42:34.128526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.128562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.128587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.419 [2024-11-20 22:42:34.140123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.419 [2024-11-20 22:42:34.140160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.419 [2024-11-20 22:42:34.140187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.679 [2024-11-20 22:42:34.152898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.679 [2024-11-20 22:42:34.152935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.679 [2024-11-20 22:42:34.152949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.679 [2024-11-20 22:42:34.164823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.679 [2024-11-20 22:42:34.164862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.679 [2024-11-20 22:42:34.164886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.679 [2024-11-20 22:42:34.176659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.679 [2024-11-20 22:42:34.176695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.679 [2024-11-20 22:42:34.176716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.679 [2024-11-20 22:42:34.188830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.679 [2024-11-20 22:42:34.188867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.679 [2024-11-20 22:42:34.188888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.679 [2024-11-20 22:42:34.200618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.679 [2024-11-20 22:42:34.200655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.200679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.212423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.212459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.212472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.224537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.224577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.224599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.236520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.236557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.236570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.248490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.248528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.248552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.260506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.260542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.260555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.272469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.272505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.272528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.284511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.284547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.680 [2024-11-20 22:42:34.284569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.296539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.296575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.296600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.308968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.309007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.309029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.321011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.321048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.321073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.332867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.332906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.332927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.344777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.344814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.344835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.356587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.356623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.356648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.368870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.368907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:20707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.368929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.380659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.380697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.380720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.392427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.392465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.392478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.680 [2024-11-20 22:42:34.404240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.680 [2024-11-20 22:42:34.404299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.680 [2024-11-20 22:42:34.404321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.940 [2024-11-20 22:42:34.416966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.940 [2024-11-20 22:42:34.417003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.940 [2024-11-20 22:42:34.417025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.940 [2024-11-20 22:42:34.428939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.940 [2024-11-20 22:42:34.428978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.428990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.440829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.440866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.440879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.453335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.453371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.453394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.465406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.465443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.465466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.478104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.478142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.478165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.490003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.490050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.490063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.501875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.501913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.501926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.513802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.513856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.513871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.523066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.523104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.523117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.533414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 
00:22:33.941 [2024-11-20 22:42:34.533452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.533477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.545346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.545382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.545395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.557231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.557269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.557294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.569084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.569121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.569133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.585335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.585370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.585394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.597133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.597169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.597191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.608948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.608985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.608997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.620642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.620678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.620692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.632429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.632465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.632489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.644002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.644040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.644063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.656243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.656299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.656314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.941 [2024-11-20 22:42:34.667981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:33.941 [2024-11-20 22:42:34.668030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.941 [2024-11-20 22:42:34.668043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.680619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.680656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.680678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.692379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.692415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.692428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.704109] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.704146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.704168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.716159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.716196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.716216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.723918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.723957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.723970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.736306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.736342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.736355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.749603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.749647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.749661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.762558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.762606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.762626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.776347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.776396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.776416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.788958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.788995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.789018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.799461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.799499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.799524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.808664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.808701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.808724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.818151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.818188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.818209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.829899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.829938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.829968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.842378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.842414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.842427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.853355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.853402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.853422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.865062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.865112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.865131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.874736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.201 [2024-11-20 22:42:34.874784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.201 [2024-11-20 22:42:34.874804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.201 [2024-11-20 22:42:34.885810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.202 [2024-11-20 22:42:34.885846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.202 [2024-11-20 22:42:34.885859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.202 [2024-11-20 22:42:34.896677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.202 [2024-11-20 22:42:34.896725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.202 [2024-11-20 22:42:34.896746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.202 [2024-11-20 22:42:34.909331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.202 [2024-11-20 22:42:34.909389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.202 [2024-11-20 22:42:34.909402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.202 [2024-11-20 22:42:34.917766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.202 [2024-11-20 22:42:34.917816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.202 [2024-11-20 22:42:34.917833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.202 [2024-11-20 22:42:34.929007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.202 [2024-11-20 22:42:34.929054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.202 [2024-11-20 22:42:34.929074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:34.942619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:34.942667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:34.942688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:34.959187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:34.959225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:34.959238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:34.970761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:34.970808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:34.970822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:34.980537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:34.980573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:34.980585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:34.993413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:34.993467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:34.993479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:35.006858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:35.006895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:35.006909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:35.019960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:35.019996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:34.462 [2024-11-20 22:42:35.020009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:35.032259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:35.032329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:35.032344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:35.044218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:35.044263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:35.044303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:35.056609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:35.056647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:35.056668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:35.065198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:35.065235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:35.065248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:35.077439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:35.077475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:35.077488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.462 [2024-11-20 22:42:35.089335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.462 [2024-11-20 22:42:35.089371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.462 [2024-11-20 22:42:35.089384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.101299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.101335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:10694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.101348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.113240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.113296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.113312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.125188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.125225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.125238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.137092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.137130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.137142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.151380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.151434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.151446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.160834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.160872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.160885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.171467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.171504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.171516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.180849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.180887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.180900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.463 [2024-11-20 22:42:35.190569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.463 [2024-11-20 22:42:35.190620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.463 [2024-11-20 22:42:35.190632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.201045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.201083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.201106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.210478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.210516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.210541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.219785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.219822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.219835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.228389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.228424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.228437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.238253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.238301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.238316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.248662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 
00:22:34.723 [2024-11-20 22:42:35.248698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.248711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.256992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.257029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.257050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.267658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.267695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.267709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.277025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.277063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.277076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.285474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.285512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.285524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.295421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.295457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.295480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.306207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.306244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.306265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.315091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.723 [2024-11-20 22:42:35.315127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.723 [2024-11-20 22:42:35.315148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.723 [2024-11-20 22:42:35.327464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.327500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.327525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.339290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.339325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.339338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.351685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.351722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.351743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.364744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.364781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.364804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.377268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.377325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.377339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.387995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.388034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.388055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.398407] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.398444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.398457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.408227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.408265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.408298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.417414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.417451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.417472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.427000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.427037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.427050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.436978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.437015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.437029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.724 [2024-11-20 22:42:35.445481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.724 [2024-11-20 22:42:35.445518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.724 [2024-11-20 22:42:35.445530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.456904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.456940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.456953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.469642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.469680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.469701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.481418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.481455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.481480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.490967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.491004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.491027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.499922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.499959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.499972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.511643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.511680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.511706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.523928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.523966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.523979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.535762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.535799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.535812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.550818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.550854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.550875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.559248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.559300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.559319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.571024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.571064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.571084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.583379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.583416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.583429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.595151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.595187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.595200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.607284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.607320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.607333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.619117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.619154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.619178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.630949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.630986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.631008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.642791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.642829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.642854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.654710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.654747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.654760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.666848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.666886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.985 [2024-11-20 22:42:35.666898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.985 [2024-11-20 22:42:35.677445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.985 [2024-11-20 22:42:35.677483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.986 [2024-11-20 22:42:35.677506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.986 [2024-11-20 22:42:35.686686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.986 [2024-11-20 22:42:35.686723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.986 [2024-11-20 22:42:35.686744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.986 [2024-11-20 22:42:35.696844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.986 [2024-11-20 22:42:35.696882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:34.986 [2024-11-20 22:42:35.696903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.986 [2024-11-20 22:42:35.706303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:34.986 [2024-11-20 22:42:35.706339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.986 [2024-11-20 22:42:35.706360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.245 [2024-11-20 22:42:35.718767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.718817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.718830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.732343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.732379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.732392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.744173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.744210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.744233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.755918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.755954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.755976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.767890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.767935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.767948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.780524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.780575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:16874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.780594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.793484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.793521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.793540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.805378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.805414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.805427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.817580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.817616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.817638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.829385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.829421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.829445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.841301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.841337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.841350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.853106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.853143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.853156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.864924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.864963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.864976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.876760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.876796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.876809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.888583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.888622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.888642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.900376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.900412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.900436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.912230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.912268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.912302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.924050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.924087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.924100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.935845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.935881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.935902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.947500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.947537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.947562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.959695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.959732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.959745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.246 [2024-11-20 22:42:35.971593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.246 [2024-11-20 22:42:35.971630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.246 [2024-11-20 22:42:35.971652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.505 [2024-11-20 22:42:35.984101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.505 [2024-11-20 22:42:35.984139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.505 [2024-11-20 22:42:35.984162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.505 [2024-11-20 22:42:35.997071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.505 [2024-11-20 22:42:35.997109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.505 [2024-11-20 22:42:35.997131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.505 [2024-11-20 22:42:36.008784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.505 [2024-11-20 22:42:36.008822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.505 [2024-11-20 22:42:36.008835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.506 [2024-11-20 22:42:36.016628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.506 [2024-11-20 22:42:36.016665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.506 [2024-11-20 22:42:36.016686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.506 [2024-11-20 22:42:36.028269] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13697f0) 00:22:35.506 [2024-11-20 22:42:36.028317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.506 [2024-11-20 22:42:36.028339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.506 00:22:35.506 Latency(us) 00:22:35.506 [2024-11-20T22:42:36.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.506 [2024-11-20T22:42:36.240Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:35.506 nvme0n1 : 2.00 22052.42 86.14 0.00 0.00 5799.09 2383.13 17277.67 00:22:35.506 [2024-11-20T22:42:36.240Z] =================================================================================================================== 00:22:35.506 [2024-11-20T22:42:36.240Z] Total : 22052.42 86.14 0.00 0.00 5799.09 2383.13 17277.67 00:22:35.506 0 00:22:35.506 22:42:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:35.506 22:42:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:35.506 22:42:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:35.506 22:42:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:35.506 | .driver_specific 00:22:35.506 | .nvme_error 00:22:35.506 | .status_code 00:22:35.506 | .command_transient_transport_error' 00:22:35.765 22:42:36 -- host/digest.sh@71 -- # (( 173 > 0 )) 00:22:35.765 22:42:36 -- host/digest.sh@73 -- # killprocess 97552 00:22:35.765 22:42:36 -- common/autotest_common.sh@936 -- # '[' -z 97552 ']' 00:22:35.765 22:42:36 -- common/autotest_common.sh@940 -- # kill -0 97552 00:22:35.765 22:42:36 -- common/autotest_common.sh@941 -- # uname 00:22:35.765 22:42:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:35.765 22:42:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97552 00:22:35.765 22:42:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:35.765 killing process with pid 97552 00:22:35.765 22:42:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:35.765 22:42:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97552' 00:22:35.765 Received shutdown signal, test time was about 2.000000 seconds 00:22:35.765 00:22:35.765 Latency(us) 00:22:35.765 [2024-11-20T22:42:36.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.765 [2024-11-20T22:42:36.499Z] =================================================================================================================== 00:22:35.765 [2024-11-20T22:42:36.499Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.765 22:42:36 -- common/autotest_common.sh@955 -- # kill 97552 00:22:35.765 22:42:36 -- common/autotest_common.sh@960 -- # wait 97552 00:22:36.024 22:42:36 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:36.024 22:42:36 -- host/digest.sh@54 -- # local rw bs qd 00:22:36.024 22:42:36 -- host/digest.sh@56 -- # rw=randread 00:22:36.024 22:42:36 -- host/digest.sh@56 -- # bs=131072 00:22:36.024 22:42:36 -- host/digest.sh@56 -- # qd=16 00:22:36.024 22:42:36 -- host/digest.sh@58 -- # bperfpid=97642 00:22:36.024 22:42:36 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 
131072 -t 2 -q 16 -z 00:22:36.024 22:42:36 -- host/digest.sh@60 -- # waitforlisten 97642 /var/tmp/bperf.sock 00:22:36.024 22:42:36 -- common/autotest_common.sh@829 -- # '[' -z 97642 ']' 00:22:36.024 22:42:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:36.024 22:42:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:36.024 22:42:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:36.024 22:42:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.024 22:42:36 -- common/autotest_common.sh@10 -- # set +x 00:22:36.024 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:36.024 Zero copy mechanism will not be used. 00:22:36.024 [2024-11-20 22:42:36.630138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:36.024 [2024-11-20 22:42:36.630238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97642 ] 00:22:36.282 [2024-11-20 22:42:36.760183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.282 [2024-11-20 22:42:36.823130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.239 22:42:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.239 22:42:37 -- common/autotest_common.sh@862 -- # return 0 00:22:37.239 22:42:37 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:37.239 22:42:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:37.239 22:42:37 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:37.239 22:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.239 22:42:37 -- common/autotest_common.sh@10 -- # set +x 00:22:37.239 22:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.239 22:42:37 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.239 22:42:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.497 nvme0n1 00:22:37.497 22:42:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:37.497 22:42:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.497 22:42:38 -- common/autotest_common.sh@10 -- # set +x 00:22:37.497 22:42:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.497 22:42:38 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:37.497 22:42:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.497 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:37.498 Zero copy mechanism will not be used. 00:22:37.498 Running I/O for 2 seconds... 
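[For readability, the xtrace above boils down to the following sequence of RPC calls; this is a minimal sketch assembled only from commands visible in the trace (paths, the /var/tmp/bperf.sock socket, and the jq filter are copied from the log, not verified against the test script), not the script itself. The accel_error_inject_error call is issued through rpc_cmd, whose target socket is not shown in the trace, so it is listed without a socket argument.]

  # enable per-error statistics and unlimited retries on the bdevperf side
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # corrupt every 32nd crc32c computation in the accel layer (issued via rpc_cmd; socket not shown in the trace)
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # attach the TCP controller with data digest enabled, so the corrupted digests surface as transient transport errors
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the 2-second workload, then (as after the 4096-byte run above) count the transient transport errors
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'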
00:22:37.498 [2024-11-20 22:42:38.214950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.498 [2024-11-20 22:42:38.215002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.498 [2024-11-20 22:42:38.215022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.498 [2024-11-20 22:42:38.218972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.498 [2024-11-20 22:42:38.219009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.498 [2024-11-20 22:42:38.219031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.498 [2024-11-20 22:42:38.222908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.498 [2024-11-20 22:42:38.222944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.498 [2024-11-20 22:42:38.222965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.498 [2024-11-20 22:42:38.227088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.498 [2024-11-20 22:42:38.227124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.498 [2024-11-20 22:42:38.227147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.758 [2024-11-20 22:42:38.231645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.758 [2024-11-20 22:42:38.231684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.758 [2024-11-20 22:42:38.231696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.758 [2024-11-20 22:42:38.236035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.758 [2024-11-20 22:42:38.236071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.758 [2024-11-20 22:42:38.236094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.758 [2024-11-20 22:42:38.240670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.758 [2024-11-20 22:42:38.240708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.758 [2024-11-20 22:42:38.240730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.758 [2024-11-20 22:42:38.244876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.758 [2024-11-20 22:42:38.244912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.758 [2024-11-20 22:42:38.244936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.758 [2024-11-20 22:42:38.249478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.758 [2024-11-20 22:42:38.249515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.758 [2024-11-20 22:42:38.249539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.758 [2024-11-20 22:42:38.254032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.758 [2024-11-20 22:42:38.254082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.758 [2024-11-20 22:42:38.254102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.758 [2024-11-20 22:42:38.257712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.758 [2024-11-20 22:42:38.257746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.257769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.262405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.262450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.262473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.266351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.266400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.266420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.270728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.270764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.270785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.274836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.274874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.274887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.278860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.278896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.278919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.282756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.282794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.282807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.286360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.286408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.286428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.290388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.290424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.290437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.294786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.294834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.294854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.300347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.300384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:37.759 [2024-11-20 22:42:38.300396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.304602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.304638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.304660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.309023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.309059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.309082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.313165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.313200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.313222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.317690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.317728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.317749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.320519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.320555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.320581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.324763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.324799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.324821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.329254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.329302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.329321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.333458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.333495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.333521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.337604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.337641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.337654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.341258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.341304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.341324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.344760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.344797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.344819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.349321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.349357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.349381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.353519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.353558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.353579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.358055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.358092] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.358104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.362005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.362041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.362054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.366169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.366218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.759 [2024-11-20 22:42:38.366236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.759 [2024-11-20 22:42:38.368777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.759 [2024-11-20 22:42:38.368811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.368834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.373593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.373631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.373655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.377119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.377157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.377178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.381461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.381499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.381524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.385295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.385330] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.385353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.389460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.389497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.389523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.393954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.393992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.394012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.397931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.397980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.397999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.402539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.402588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.402608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.406636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.406671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.406684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.409567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.409602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.409626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.414029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.414066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.414079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.418511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.418560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.418580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.422671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.422708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.422729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.426344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.426392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.426411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.430160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.430209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.430228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.434118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.434169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.434189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.438634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.438672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.438697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.442254] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.442304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.442323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.445856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.445892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.445912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.450334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.450378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.450401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.454590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.454637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.454660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.459071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.459108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.459129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.463089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.463125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.463148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.760 [2024-11-20 22:42:38.467329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.760 [2024-11-20 22:42:38.467364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.760 [2024-11-20 22:42:38.467389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:37.760 [2024-11-20 22:42:38.471447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.761 [2024-11-20 22:42:38.471483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.761 [2024-11-20 22:42:38.471506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.761 [2024-11-20 22:42:38.475563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.761 [2024-11-20 22:42:38.475600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.761 [2024-11-20 22:42:38.475622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.761 [2024-11-20 22:42:38.479313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.761 [2024-11-20 22:42:38.479361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.761 [2024-11-20 22:42:38.479373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.761 [2024-11-20 22:42:38.483203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.761 [2024-11-20 22:42:38.483239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.761 [2024-11-20 22:42:38.483261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.761 [2024-11-20 22:42:38.487776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:37.761 [2024-11-20 22:42:38.487812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.761 [2024-11-20 22:42:38.487836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.491799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.491836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.491857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.495870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.495907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.495920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.500718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.500755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.500774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.505629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.505666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.505686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.509200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.509237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.509250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.513628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.513673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.513693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.518355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.518392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.518412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.523500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.523537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.523549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.527950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.527987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.528009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.531295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.531331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.531350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.535889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.535927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.535947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.540853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.540904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.540923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.545070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.545107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.545128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.549706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.549743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.549755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.554555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.554593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.554615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.557250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.557294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:38.033 [2024-11-20 22:42:38.557313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.561059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.561109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.561130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.565880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.565919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.565933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.570181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.570229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.570242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.574384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.574420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.574433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.578529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.578565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.578585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.582710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.582748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.582761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.585995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.586030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.586050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.589177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.589215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.589236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.593471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.593508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.593533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.033 [2024-11-20 22:42:38.597119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.033 [2024-11-20 22:42:38.597155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.033 [2024-11-20 22:42:38.597176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.601308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.601343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.601367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.604881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.604918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.604941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.608500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.608538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.608562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.612556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.612594] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.612617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.616750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.616788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.616813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.621472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.621509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.621533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.625360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.625397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.625418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.629023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.629060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.629081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.633300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.633335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.633355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.637253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.637313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.637332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.641214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 
22:42:38.641264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.641291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.645190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.645240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.645260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.649755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.649800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.649820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.653881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.653930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.653950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.657892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.657928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.657947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.661592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.661627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.661648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.664693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.664729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.664752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.668936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.668972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.668993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.673360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.673395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.673415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.677399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.677436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.677457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.681185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.681221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.681244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.685404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.685442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.685466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.688991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.689027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.689041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.693435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.693471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.693493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.697471] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.697508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.697531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.701618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.701654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.701676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.705028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.034 [2024-11-20 22:42:38.705066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.034 [2024-11-20 22:42:38.705086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.034 [2024-11-20 22:42:38.708952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.708988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.709008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.712609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.712646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.712671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.716653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.716690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.716711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.719741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.719777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.719800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:38.035 [2024-11-20 22:42:38.724250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.724304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.724324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.727701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.727738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.727761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.731342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.731378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.731401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.735048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.735084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.735107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.739434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.739470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.739489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.743322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.743354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.743373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.747019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.747056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.747077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.035 [2024-11-20 22:42:38.751644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.035 [2024-11-20 22:42:38.751687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.035 [2024-11-20 22:42:38.751705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.329 [2024-11-20 22:42:38.756565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.329 [2024-11-20 22:42:38.756614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.329 [2024-11-20 22:42:38.756633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.329 [2024-11-20 22:42:38.761701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.329 [2024-11-20 22:42:38.761737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.329 [2024-11-20 22:42:38.761750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.329 [2024-11-20 22:42:38.765559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.329 [2024-11-20 22:42:38.765595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.329 [2024-11-20 22:42:38.765608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.770320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.770369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.770392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.774897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.774934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.774947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.779361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.779409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.779430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.783946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.783985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.784010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.787982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.788019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.788042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.791912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.791950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.791971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.796043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.796080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.796103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.799692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.799730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.799753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.803734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.803771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.803794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.806931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.806967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.806990] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.810882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.810920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.810941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.815805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.815842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.815864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.819119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.819155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.819176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.822822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.822858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.822880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.826868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.826916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.826935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.831066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.831105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.831127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.835388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.835426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:38.330 [2024-11-20 22:42:38.835447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.839440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.839473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.839494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.843922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.843959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.843980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.847892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.847929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.847952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.851853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.851891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.851913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.855997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.856034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.856055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.859998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.860035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.860056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.864411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.864448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.864473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.868457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.868494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.868519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.872243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.872293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.330 [2024-11-20 22:42:38.872308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.330 [2024-11-20 22:42:38.876154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.330 [2024-11-20 22:42:38.876191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.876204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.879762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.879800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.879824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.883762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.883799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.883820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.887829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.887866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.887888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.891419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.891468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.891491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.895432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.895496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.895510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.899769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.899805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.899827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.904104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.904141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.904165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.907950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.907987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.908010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.911664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.911701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.911722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.915981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.916017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.916040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.920224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 
[2024-11-20 22:42:38.920262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.920286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.923921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.923957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.923978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.927573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.927609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.927631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.931979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.932016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.932038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.936491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.936528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.936541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.940215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.940252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.940284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.944404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.944439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.944452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.948675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.948711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.948732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.953094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.953130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.953152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.956778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.956813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.956837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.961265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.961314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.961336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.965749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.965792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.965808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.969684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.969721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.969742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.973254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.973304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.973324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.977064] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.331 [2024-11-20 22:42:38.977112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.331 [2024-11-20 22:42:38.977131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.331 [2024-11-20 22:42:38.981004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:38.981040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:38.981063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:38.985094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:38.985129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:38.985153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:38.989006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:38.989044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:38.989067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:38.992728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:38.992764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:38.992785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:38.996584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:38.996631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:38.996650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.000534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.000572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.000585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:38.332 [2024-11-20 22:42:39.004411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.004448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.004461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.007586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.007622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.007647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.010998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.011034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.011047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.015493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.015530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.015554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.020185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.020222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.020242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.024042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.024078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.024099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.028252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.028313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.028326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.031885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.031921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.031944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.036389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.036437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.036457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.040615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.040663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.040683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.044449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.044484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.044497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.332 [2024-11-20 22:42:39.048823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.332 [2024-11-20 22:42:39.048871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.332 [2024-11-20 22:42:39.048891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.605 [2024-11-20 22:42:39.053432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.605 [2024-11-20 22:42:39.053479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.605 [2024-11-20 22:42:39.053501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.605 [2024-11-20 22:42:39.057474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.605 [2024-11-20 22:42:39.057523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.605 [2024-11-20 22:42:39.057544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.605 [2024-11-20 22:42:39.062433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.605 [2024-11-20 22:42:39.062483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.062496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.066370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.066418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.066430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.071029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.071065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.071078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.074378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.074416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.074438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.078521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.078557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.078570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.083003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.083040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.083052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.087081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.087118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:38.606 [2024-11-20 22:42:39.087139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.091133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.091169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.091192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.094982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.095019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.095041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.098594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.098631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.098650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.102801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.102838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.102859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.106510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.106549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.106574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.110377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.110415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.110440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.114591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.114628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.114649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.118880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.118917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.118938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.123173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.123210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.123233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.127237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.127299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.127314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.131801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.131837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.131859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.135835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.135870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.135894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.138722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.138759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.138781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.143535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.606 [2024-11-20 22:42:39.143572] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.606 [2024-11-20 22:42:39.143598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.606 [2024-11-20 22:42:39.146700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.146736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.146758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.150914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.150951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.150973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.154746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.154782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.154803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.159027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.159064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.159087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.163094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.163130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.163152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.166700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.166737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.166759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.170380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.170417] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.170441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.174124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.174161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.174173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.178436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.178473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.178498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.182714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.182750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.182771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.186545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.186582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.186606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.190002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.190039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.190051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.193749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.193793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.193807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.197605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.197654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.197675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.201667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.201703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.201724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.206028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.206065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.206077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.209830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.209865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.209878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.214341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.214377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.214401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.217861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.217911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.217932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.222180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.222216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.222233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.226091] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.226128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.607 [2024-11-20 22:42:39.226144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.607 [2024-11-20 22:42:39.230485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.607 [2024-11-20 22:42:39.230522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.230534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.234206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.234243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.234255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.238499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.238535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.238557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.242987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.243023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.243044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.246861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.246898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.246923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.250669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.250706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.250728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:38.608 [2024-11-20 22:42:39.254101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.254150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.254162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.258105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.258150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.258167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.262100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.262138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.262151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.265885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.265923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.265942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.269721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.269769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.269798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.274052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.274089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.274102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.278466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.278515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.278535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.282088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.282132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.282148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.285488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.285523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.285547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.289396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.289434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.289455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.293603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.293640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.293660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.297702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.297739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.297760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.302017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.302054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.302067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.305442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.305477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.305503] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.308980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.309018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.309042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.313552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.313589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.313611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.317768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.317816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.317838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.321953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.608 [2024-11-20 22:42:39.321990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.608 [2024-11-20 22:42:39.322010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.608 [2024-11-20 22:42:39.325854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.609 [2024-11-20 22:42:39.325903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.609 [2024-11-20 22:42:39.325923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.609 [2024-11-20 22:42:39.329856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.609 [2024-11-20 22:42:39.329892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.609 [2024-11-20 22:42:39.329912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.609 [2024-11-20 22:42:39.334206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.609 [2024-11-20 22:42:39.334243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.609 [2024-11-20 22:42:39.334255] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.869 [2024-11-20 22:42:39.338256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.338305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.338325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.342307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.342344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.342358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.346480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.346517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.346539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.350139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.350188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.350208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.353983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.354020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.354040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.358077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.358114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.358132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.362004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.362041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:38.870 [2024-11-20 22:42:39.362062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.365516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.365553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.365576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.369539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.369576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.369594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.372884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.372920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.372940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.377186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.377222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.377245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.381257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.381303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.381323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.385665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.385715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.385734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.389264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.389311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.389334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.393063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.393099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.393122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.398016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.398053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.398073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.401704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.401741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.401764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.405612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.405648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.405669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.409689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.409726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.409746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.413445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.413481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.413506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.417843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.417881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.417901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.422225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.422262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.422294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.425994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.426045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.426064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.430619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.430670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.430688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.434508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.434557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.434576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.438399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.438443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.438467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.870 [2024-11-20 22:42:39.442097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.870 [2024-11-20 22:42:39.442135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.870 [2024-11-20 22:42:39.442149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.446239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.446286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.446303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.450159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.450208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.450227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.454520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.454557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.454569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.458512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.458559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.458584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.462345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.462394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.462416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.466089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.466133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.466150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.469748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.469784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.469808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.473531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 
[2024-11-20 22:42:39.473568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.473589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.477439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.477476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.477496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.481380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.481417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.481430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.485662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.485699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.485720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.489249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.489296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.489316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.493517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.493554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.493567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.497299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.497345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.497358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.501569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.501604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.501627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.505892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.505930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.505943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.510246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.510294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.510308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.514680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.514717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.514740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.518977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.519015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.519036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.523112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.523150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.523173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.527574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.527623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.527642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.530772] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.530808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.530831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.535600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.535637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.535650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.538975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.539012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.539035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.542094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.542141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.542159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.546259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.546308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.546327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.550117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.871 [2024-11-20 22:42:39.550158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.871 [2024-11-20 22:42:39.550175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.871 [2024-11-20 22:42:39.554840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.554876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.554889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:38.872 [2024-11-20 22:42:39.559015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.559065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.559084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.564233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.564292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.564312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.568887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.568924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.568944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.573308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.573367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.573380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.577766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.577820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.577837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.581727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.581762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.581783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.585516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.585552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.585573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.589458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.589495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.589508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.593697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.593733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.593746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.872 [2024-11-20 22:42:39.597412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:38.872 [2024-11-20 22:42:39.597448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.872 [2024-11-20 22:42:39.597461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.133 [2024-11-20 22:42:39.600817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.133 [2024-11-20 22:42:39.600866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.133 [2024-11-20 22:42:39.600885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.133 [2024-11-20 22:42:39.604955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.133 [2024-11-20 22:42:39.605005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.133 [2024-11-20 22:42:39.605025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.133 [2024-11-20 22:42:39.609352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.133 [2024-11-20 22:42:39.609388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.133 [2024-11-20 22:42:39.609411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.133 [2024-11-20 22:42:39.613084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.133 [2024-11-20 22:42:39.613121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.133 [2024-11-20 22:42:39.613142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.133 [2024-11-20 22:42:39.616890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.133 [2024-11-20 22:42:39.616926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.133 [2024-11-20 22:42:39.616949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.133 [2024-11-20 22:42:39.620957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.133 [2024-11-20 22:42:39.620994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.133 [2024-11-20 22:42:39.621017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.133 [2024-11-20 22:42:39.624808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.133 [2024-11-20 22:42:39.624843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.624867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.629513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.629549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.629562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.634066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.634104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.634124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.638513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.638561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.638581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.642000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.642038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:39.134 [2024-11-20 22:42:39.642050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.645977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.646015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.646035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.650367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.650400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.650419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.654408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.654459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.654479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.658415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.658452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.658465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.661776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.661828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.661841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.665860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.665898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.665918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.670058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.670107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.670120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.674356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.674405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.674426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.678333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.678382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.678403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.681359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.681391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.681410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.685353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.685388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.685401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.690230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.690299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.690314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.694771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.694807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.694830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.699581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.699624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.699641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.703048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.703085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.703108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.706873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.706909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.706930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.711212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.711261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.711297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.715575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.715624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.715644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.719897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.719934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.719955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.723145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.723182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.723204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.727443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.727492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.727507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.731162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.731198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.134 [2024-11-20 22:42:39.731219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.134 [2024-11-20 22:42:39.735736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.134 [2024-11-20 22:42:39.735780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.735792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.739676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.739712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.739734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.743878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.743914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.743935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.748386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.748430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.748442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.752652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.752689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.752701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.756438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.756474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.756499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.760733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.760774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.760787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.765238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.765298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.765322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.769641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.769688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.769702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.774723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.774758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.774779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.778586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.778633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.778652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.782304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.782351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.782371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.786132] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.786181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.786194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.790594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.790643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.790662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.794601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.794648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.794668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.798395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.798430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.798449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.802724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.802762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.802775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.806621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.806657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.806669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.810124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.810162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.810182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.814119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.814156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.814168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.818516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.818553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.818565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.822456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.822493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.822506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.826136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.826172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.826185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.830668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.830703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.830715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.834998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.835036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.835057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.838432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.838481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.838502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.842154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.842202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.842215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.135 [2024-11-20 22:42:39.846032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.135 [2024-11-20 22:42:39.846074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.135 [2024-11-20 22:42:39.846087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.136 [2024-11-20 22:42:39.850364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.136 [2024-11-20 22:42:39.850399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.136 [2024-11-20 22:42:39.850422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.136 [2024-11-20 22:42:39.854545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.136 [2024-11-20 22:42:39.854582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.136 [2024-11-20 22:42:39.854595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.136 [2024-11-20 22:42:39.858987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.136 [2024-11-20 22:42:39.859035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.136 [2024-11-20 22:42:39.859048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.396 [2024-11-20 22:42:39.863056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.396 [2024-11-20 22:42:39.863103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.396 [2024-11-20 22:42:39.863121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.396 [2024-11-20 22:42:39.867073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.396 [2024-11-20 22:42:39.867110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.396 [2024-11-20 22:42:39.867133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.396 [2024-11-20 22:42:39.870829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.396 [2024-11-20 22:42:39.870865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.396 [2024-11-20 22:42:39.870887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.396 [2024-11-20 22:42:39.874593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.396 [2024-11-20 22:42:39.874630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.396 [2024-11-20 22:42:39.874654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.396 [2024-11-20 22:42:39.877959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.396 [2024-11-20 22:42:39.877996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.878016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.881713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.881749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.881773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.886234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.886270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.886307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.889951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.889987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.890008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.894612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.894648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:39.397 [2024-11-20 22:42:39.894671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.898993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.899029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.899053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.903400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.903435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.903456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.907583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.907619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.907641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.912244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.912299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.912314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.916517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.916553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.916567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.920435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.920483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.920504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.923779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.923816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.923839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.927408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.927443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.927468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.931077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.931112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.931134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.935343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.935389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.935416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.939620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.939657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.939679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.943476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.943512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.943537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.947380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.947416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.947438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.951216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.951253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.951288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.954400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.954436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.954449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.958902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.958937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.958958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.962912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.962946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.962967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.966891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.966927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.966950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.971115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.971150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.971171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.975530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.975579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.975602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.979257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 
00:22:39.397 [2024-11-20 22:42:39.979304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.979324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.982889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.982926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.397 [2024-11-20 22:42:39.982948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.397 [2024-11-20 22:42:39.986962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.397 [2024-11-20 22:42:39.986999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:39.987023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:39.990542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:39.990579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:39.990602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:39.994404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:39.994441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:39.994454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:39.998906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:39.998942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:39.998955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.003280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.003334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.003353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.007394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.007433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.007446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.011807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.011856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.011875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.015788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.015826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.015846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.020227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.020266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.020292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.026040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.026081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.026096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.031267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.031325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.031351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.035788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.035837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.035857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.039716] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.039753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.039774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.043774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.043823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.043843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.048801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.048863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.048884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.054603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.054643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.054666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.059069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.059107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.059128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.062648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.062684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.062709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.065974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.066013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.066033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:39.398 [2024-11-20 22:42:40.069744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.069779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.069851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.073717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.073753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.073773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.077745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.077782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.077853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.082234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.082308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.082328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.086708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.086745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.086768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.090883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.090919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.090941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.095362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.095398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.095423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.099259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.099305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.398 [2024-11-20 22:42:40.099325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.398 [2024-11-20 22:42:40.102672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.398 [2024-11-20 22:42:40.102708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.399 [2024-11-20 22:42:40.102731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.399 [2024-11-20 22:42:40.107152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.399 [2024-11-20 22:42:40.107189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.399 [2024-11-20 22:42:40.107210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.399 [2024-11-20 22:42:40.111528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.399 [2024-11-20 22:42:40.111564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.399 [2024-11-20 22:42:40.111585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.399 [2024-11-20 22:42:40.115977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.399 [2024-11-20 22:42:40.116014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.399 [2024-11-20 22:42:40.116037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.399 [2024-11-20 22:42:40.119873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.399 [2024-11-20 22:42:40.119910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.399 [2024-11-20 22:42:40.119932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.399 [2024-11-20 22:42:40.124345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.399 [2024-11-20 22:42:40.124380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.399 [2024-11-20 22:42:40.124392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.128342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.128377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.128401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.132423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.132459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.132472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.136564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.136602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.136623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.140673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.140710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.140731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.144215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.144250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.144286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.148212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.148249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.148272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.152521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.152558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:39.658 [2024-11-20 22:42:40.152582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.155913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.155950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.155974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.159841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.159878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.159902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.163021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.163058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.163079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.167545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.167591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.167612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.170679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.170716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.170739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.174459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.174495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.174515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.178763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.178801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.178824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.182986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.183022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.183045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.186729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.186766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.186788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.190735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.190773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.190796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.194629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.194667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.658 [2024-11-20 22:42:40.194691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.658 [2024-11-20 22:42:40.198913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.658 [2024-11-20 22:42:40.198949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.659 [2024-11-20 22:42:40.198972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.659 [2024-11-20 22:42:40.202511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a354a0) 00:22:39.659 [2024-11-20 22:42:40.202547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.659 [2024-11-20 22:42:40.202570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.659 00:22:39.659 Latency(us) 00:22:39.659 [2024-11-20T22:42:40.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.659 [2024-11-20T22:42:40.393Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, 
depth: 16, IO size: 131072) 00:22:39.659 nvme0n1 : 2.04 7438.15 929.77 0.00 0.00 2107.95 521.31 42657.98 00:22:39.659 [2024-11-20T22:42:40.393Z] =================================================================================================================== 00:22:39.659 [2024-11-20T22:42:40.393Z] Total : 7438.15 929.77 0.00 0.00 2107.95 521.31 42657.98 00:22:39.659 0 00:22:39.659 22:42:40 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:39.659 22:42:40 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:39.659 22:42:40 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:39.659 | .driver_specific 00:22:39.659 | .nvme_error 00:22:39.659 | .status_code 00:22:39.659 | .command_transient_transport_error' 00:22:39.659 22:42:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:39.917 22:42:40 -- host/digest.sh@71 -- # (( 489 > 0 )) 00:22:39.917 22:42:40 -- host/digest.sh@73 -- # killprocess 97642 00:22:39.917 22:42:40 -- common/autotest_common.sh@936 -- # '[' -z 97642 ']' 00:22:39.917 22:42:40 -- common/autotest_common.sh@940 -- # kill -0 97642 00:22:39.917 22:42:40 -- common/autotest_common.sh@941 -- # uname 00:22:39.917 22:42:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.917 22:42:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97642 00:22:39.917 22:42:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:39.917 killing process with pid 97642 00:22:39.917 Received shutdown signal, test time was about 2.000000 seconds 00:22:39.917 00:22:39.917 Latency(us) 00:22:39.917 [2024-11-20T22:42:40.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.917 [2024-11-20T22:42:40.651Z] =================================================================================================================== 00:22:39.917 [2024-11-20T22:42:40.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.917 22:42:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:39.917 22:42:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97642' 00:22:39.918 22:42:40 -- common/autotest_common.sh@955 -- # kill 97642 00:22:39.918 22:42:40 -- common/autotest_common.sh@960 -- # wait 97642 00:22:40.176 22:42:40 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:22:40.176 22:42:40 -- host/digest.sh@54 -- # local rw bs qd 00:22:40.176 22:42:40 -- host/digest.sh@56 -- # rw=randwrite 00:22:40.176 22:42:40 -- host/digest.sh@56 -- # bs=4096 00:22:40.176 22:42:40 -- host/digest.sh@56 -- # qd=128 00:22:40.176 22:42:40 -- host/digest.sh@58 -- # bperfpid=97727 00:22:40.176 22:42:40 -- host/digest.sh@60 -- # waitforlisten 97727 /var/tmp/bperf.sock 00:22:40.176 22:42:40 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:40.176 22:42:40 -- common/autotest_common.sh@829 -- # '[' -z 97727 ']' 00:22:40.176 22:42:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:40.176 22:42:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:40.176 22:42:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:22:40.176 22:42:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.176 22:42:40 -- common/autotest_common.sh@10 -- # set +x 00:22:40.176 [2024-11-20 22:42:40.863540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:40.176 [2024-11-20 22:42:40.863657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97727 ] 00:22:40.435 [2024-11-20 22:42:40.999397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.435 [2024-11-20 22:42:41.062493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.372 22:42:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.372 22:42:41 -- common/autotest_common.sh@862 -- # return 0 00:22:41.372 22:42:41 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:41.372 22:42:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:41.372 22:42:42 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:41.372 22:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.372 22:42:42 -- common/autotest_common.sh@10 -- # set +x 00:22:41.372 22:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.372 22:42:42 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.372 22:42:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.941 nvme0n1 00:22:41.941 22:42:42 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:41.941 22:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.941 22:42:42 -- common/autotest_common.sh@10 -- # set +x 00:22:41.941 22:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.941 22:42:42 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:41.941 22:42:42 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:41.941 Running I/O for 2 seconds... 
00:22:41.941 [2024-11-20 22:42:42.492879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f6890 00:22:41.941 [2024-11-20 22:42:42.493299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.493339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.503945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fd640 00:22:41.941 [2024-11-20 22:42:42.504805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.504864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.512494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e5658 00:22:41.941 [2024-11-20 22:42:42.512914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.512945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.520703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f1ca0 00:22:41.941 [2024-11-20 22:42:42.520837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.520859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.530497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fd208 00:22:41.941 [2024-11-20 22:42:42.531509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.531552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.540647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f5be8 00:22:41.941 [2024-11-20 22:42:42.541766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.541829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.550044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e5220 00:22:41.941 [2024-11-20 22:42:42.550520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.550550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 
sqhd:0055 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.559448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fa7d8 00:22:41.941 [2024-11-20 22:42:42.560261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.560310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.568709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ed0b0 00:22:41.941 [2024-11-20 22:42:42.569236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.569266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.577957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f3a28 00:22:41.941 [2024-11-20 22:42:42.578528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.578558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.587222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fef90 00:22:41.941 [2024-11-20 22:42:42.587731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.587763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.596434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6300 00:22:41.941 [2024-11-20 22:42:42.596882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.596913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.605578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ef270 00:22:41.941 [2024-11-20 22:42:42.606043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.606073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.614883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fe720 00:22:41.941 [2024-11-20 22:42:42.615261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.615311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.626625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ee190 00:22:41.941 [2024-11-20 22:42:42.627861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.627894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.633507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e4578 00:22:41.941 [2024-11-20 22:42:42.633927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.633956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.643374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ef270 00:22:41.941 [2024-11-20 22:42:42.644666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.644701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.652571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eaef0 00:22:41.941 [2024-11-20 22:42:42.652936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.652965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.661720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e49b0 00:22:41.941 [2024-11-20 22:42:42.662131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.941 [2024-11-20 22:42:42.662162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.941 [2024-11-20 22:42:42.671597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ed920 00:22:42.201 [2024-11-20 22:42:42.672020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.672048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.681094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e99d8 00:22:42.201 [2024-11-20 22:42:42.682076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.682122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.691326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f5378 00:22:42.201 [2024-11-20 22:42:42.692768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.692802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.701361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fa3a0 00:22:42.201 [2024-11-20 22:42:42.702771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.702806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.710535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190de8a8 00:22:42.201 [2024-11-20 22:42:42.711033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.711062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.719770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fef90 00:22:42.201 [2024-11-20 22:42:42.720248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.720289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.727824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f7970 00:22:42.201 [2024-11-20 22:42:42.728751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.728790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.738949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e8088 00:22:42.201 [2024-11-20 22:42:42.739712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.739748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.748357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f31b8 00:22:42.201 [2024-11-20 22:42:42.749118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.749153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.757691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f8a50 00:22:42.201 [2024-11-20 22:42:42.758527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.758562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.767243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e3060 00:22:42.201 [2024-11-20 22:42:42.767904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.767938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.776523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f96f8 00:22:42.201 [2024-11-20 22:42:42.777238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.201 [2024-11-20 22:42:42.777272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:42.201 [2024-11-20 22:42:42.785735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fb048 00:22:42.202 [2024-11-20 22:42:42.786537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.786571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.794986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ed0b0 00:22:42.202 [2024-11-20 22:42:42.795713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.795748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.804400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ef6a8 00:22:42.202 [2024-11-20 22:42:42.805135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.805169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.813775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fa3a0 00:22:42.202 [2024-11-20 22:42:42.814559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 
22:42:42.814593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.823136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e0630 00:22:42.202 [2024-11-20 22:42:42.823895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.823930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.832950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ea248 00:22:42.202 [2024-11-20 22:42:42.833684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.833718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.841096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e5a90 00:22:42.202 [2024-11-20 22:42:42.841362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.841416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.852691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f8a50 00:22:42.202 [2024-11-20 22:42:42.853590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.853624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.861489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6fa8 00:22:42.202 [2024-11-20 22:42:42.862829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.862873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.871131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e3d08 00:22:42.202 [2024-11-20 22:42:42.871825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.871858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.880964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f6cc8 00:22:42.202 [2024-11-20 22:42:42.882346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7616 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:42.202 [2024-11-20 22:42:42.882380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.891314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6fa8 00:22:42.202 [2024-11-20 22:42:42.892376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.892419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.899711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e4578 00:22:42.202 [2024-11-20 22:42:42.900628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.900663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.909608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fb8b8 00:22:42.202 [2024-11-20 22:42:42.910367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.910401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.202 [2024-11-20 22:42:42.920550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e3498 00:22:42.202 [2024-11-20 22:42:42.921237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.202 [2024-11-20 22:42:42.921288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.461 [2024-11-20 22:42:42.932735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fbcf0 00:22:42.462 [2024-11-20 22:42:42.933965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:42.934001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:42.941340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ebfd0 00:22:42.462 [2024-11-20 22:42:42.942673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:42.942719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:42.951138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190dfdc0 00:22:42.462 [2024-11-20 22:42:42.951667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8871 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:42.951697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:42.960408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f20d8 00:22:42.462 [2024-11-20 22:42:42.961072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:42.961106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:42.969677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e5658 00:22:42.462 [2024-11-20 22:42:42.970249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:42.970296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:42.979086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eff18 00:22:42.462 [2024-11-20 22:42:42.979599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:42.979628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:42.988250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f4b08 00:22:42.462 [2024-11-20 22:42:42.988839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:42.988879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:42.996454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f6890 00:22:42.462 [2024-11-20 22:42:42.996540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:42.996561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.007633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e84c0 00:22:42.462 [2024-11-20 22:42:43.008581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.008615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.016897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f6458 00:22:42.462 [2024-11-20 22:42:43.017930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:6170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.017974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.026266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e5a90 00:22:42.462 [2024-11-20 22:42:43.027290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.027322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.036784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f7538 00:22:42.462 [2024-11-20 22:42:43.037920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.037965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.043720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e0ea0 00:22:42.462 [2024-11-20 22:42:43.043831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.043852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.052985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e3498 00:22:42.462 [2024-11-20 22:42:43.053232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.053264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.063651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e73e0 00:22:42.462 [2024-11-20 22:42:43.065215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.065251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.073090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e2c28 00:22:42.462 [2024-11-20 22:42:43.074231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.074291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.081536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ee190 00:22:42.462 [2024-11-20 22:42:43.081955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:17609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.081985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.091246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6300 00:22:42.462 [2024-11-20 22:42:43.092333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.092367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.101494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e84c0 00:22:42.462 [2024-11-20 22:42:43.102412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.102455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.110388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ec408 00:22:42.462 [2024-11-20 22:42:43.111611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.111657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.119931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e4de8 00:22:42.462 [2024-11-20 22:42:43.120501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.120530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.129458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fbcf0 00:22:42.462 [2024-11-20 22:42:43.130267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.130323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.139547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f1868 00:22:42.462 [2024-11-20 22:42:43.140434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.140467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.147876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f1430 00:22:42.462 [2024-11-20 22:42:43.148520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.148549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.157080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6fa8 00:22:42.462 [2024-11-20 22:42:43.157474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.462 [2024-11-20 22:42:43.157505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:42.462 [2024-11-20 22:42:43.168502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e4140 00:22:42.462 [2024-11-20 22:42:43.170062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.463 [2024-11-20 22:42:43.170104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:42.463 [2024-11-20 22:42:43.179073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f7970 00:22:42.463 [2024-11-20 22:42:43.179975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.463 [2024-11-20 22:42:43.180007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:42.463 [2024-11-20 22:42:43.189001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fd640 00:22:42.463 [2024-11-20 22:42:43.190728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.463 [2024-11-20 22:42:43.190773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.200531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ef6a8 00:22:42.722 [2024-11-20 22:42:43.201610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.201643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.207838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e23b8 00:22:42.722 [2024-11-20 22:42:43.208047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.208068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.219607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6738 00:22:42.722 [2024-11-20 
22:42:43.220498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.220531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.228007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e88f8 00:22:42.722 [2024-11-20 22:42:43.228431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.228461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.236519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fc128 00:22:42.722 [2024-11-20 22:42:43.236637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.236658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.246937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e95a0 00:22:42.722 [2024-11-20 22:42:43.247203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.247244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.257883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eaef0 00:22:42.722 [2024-11-20 22:42:43.259576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.259611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.266364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f5be8 00:22:42.722 [2024-11-20 22:42:43.267481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.267514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.275887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fe720 00:22:42.722 [2024-11-20 22:42:43.276211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.276240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.285646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190df550 
00:22:42.722 [2024-11-20 22:42:43.286747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.286781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.294535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e1f80 00:22:42.722 [2024-11-20 22:42:43.294883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.294913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:42.722 [2024-11-20 22:42:43.305097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ed920 00:22:42.722 [2024-11-20 22:42:43.305594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.722 [2024-11-20 22:42:43.305622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.316212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f3a28 00:22:42.723 [2024-11-20 22:42:43.317184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.317217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.324724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f57b0 00:22:42.723 [2024-11-20 22:42:43.325239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.325270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.332936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f6020 00:22:42.723 [2024-11-20 22:42:43.333129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.333150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.342714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eea00 00:22:42.723 [2024-11-20 22:42:43.343793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.343827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.352130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) 
with pdu=0x2000190e8088 00:22:42.723 [2024-11-20 22:42:43.352342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.352363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.361561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190edd58 00:22:42.723 [2024-11-20 22:42:43.362576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.362611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.370956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e0a68 00:22:42.723 [2024-11-20 22:42:43.371699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.371735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.379497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190de8a8 00:22:42.723 [2024-11-20 22:42:43.379583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.379605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.390743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eff18 00:22:42.723 [2024-11-20 22:42:43.391300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.391329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.400112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190de470 00:22:42.723 [2024-11-20 22:42:43.401416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.401450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.409563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fa7d8 00:22:42.723 [2024-11-20 22:42:43.410176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.410204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.418754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22f2a00) with pdu=0x2000190de470 00:22:42.723 [2024-11-20 22:42:43.419267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.419306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.428028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fe2e8 00:22:42.723 [2024-11-20 22:42:43.428541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.428571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.437221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eaab8 00:22:42.723 [2024-11-20 22:42:43.437900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.437935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:42.723 [2024-11-20 22:42:43.445381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eea00 00:22:42.723 [2024-11-20 22:42:43.445597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.723 [2024-11-20 22:42:43.445618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.456072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f3a28 00:22:42.983 [2024-11-20 22:42:43.456949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.456985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.466912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f3a28 00:22:42.983 [2024-11-20 22:42:43.468405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.468439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.476183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fc128 00:22:42.983 [2024-11-20 22:42:43.477464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.477510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.486347] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fe720 00:22:42.983 [2024-11-20 22:42:43.487492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.487526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.493857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ff3c8 00:22:42.983 [2024-11-20 22:42:43.494485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.494537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.503475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f5be8 00:22:42.983 [2024-11-20 22:42:43.503952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.503982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.514524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f35f0 00:22:42.983 [2024-11-20 22:42:43.515780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.515813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.523773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e5a90 00:22:42.983 [2024-11-20 22:42:43.525260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.525315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.532220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e4de8 00:22:42.983 [2024-11-20 22:42:43.533572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.533613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.541199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e8d30 00:22:42.983 [2024-11-20 22:42:43.542515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.542561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.551319] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f9b30 00:22:42.983 [2024-11-20 22:42:43.552844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.552878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.561317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e12d8 00:22:42.983 [2024-11-20 22:42:43.563103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.563137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.570554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fc998 00:22:42.983 [2024-11-20 22:42:43.572223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.572255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.579789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e1f80 00:22:42.983 [2024-11-20 22:42:43.581268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.581325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.587897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f3e60 00:22:42.983 [2024-11-20 22:42:43.589003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.589037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.598018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190de470 00:22:42.983 [2024-11-20 22:42:43.599295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.599324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:42.983 [2024-11-20 22:42:43.607739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190edd58 00:22:42.983 [2024-11-20 22:42:43.608681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.983 [2024-11-20 22:42:43.608713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 
22:42:43.616061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f4f40 00:22:42.984 [2024-11-20 22:42:43.616832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.616865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.625320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ef270 00:22:42.984 [2024-11-20 22:42:43.625846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.625877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.634764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e3d08 00:22:42.984 [2024-11-20 22:42:43.635225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.635255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.643922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f3a28 00:22:42.984 [2024-11-20 22:42:43.644402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.644432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.653374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6300 00:22:42.984 [2024-11-20 22:42:43.653870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.653899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.662594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ec840 00:22:42.984 [2024-11-20 22:42:43.663680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.663713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.671893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f6020 00:22:42.984 [2024-11-20 22:42:43.672683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.672726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:22:42.984 [2024-11-20 22:42:43.680460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ea248 00:22:42.984 [2024-11-20 22:42:43.680624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.680646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.690269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f0350 00:22:42.984 [2024-11-20 22:42:43.691322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.691355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.699430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e0630 00:22:42.984 [2024-11-20 22:42:43.699585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.699606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:42.984 [2024-11-20 22:42:43.711465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e84c0 00:22:42.984 [2024-11-20 22:42:43.712643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.984 [2024-11-20 22:42:43.712672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:43.243 [2024-11-20 22:42:43.720240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f3e60 00:22:43.243 [2024-11-20 22:42:43.721562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.243 [2024-11-20 22:42:43.721607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:43.243 [2024-11-20 22:42:43.729662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ed0b0 00:22:43.243 [2024-11-20 22:42:43.731153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.243 [2024-11-20 22:42:43.731198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:43.243 [2024-11-20 22:42:43.739201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fa3a0 00:22:43.243 [2024-11-20 22:42:43.740568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.243 [2024-11-20 22:42:43.740602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 
cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:43.243 [2024-11-20 22:42:43.748064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e9e10 00:22:43.243 [2024-11-20 22:42:43.748665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.243 [2024-11-20 22:42:43.748711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:43.243 [2024-11-20 22:42:43.758900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f6020 00:22:43.244 [2024-11-20 22:42:43.759939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.759971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.767198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e4578 00:22:43.244 [2024-11-20 22:42:43.768053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.768086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.776542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fa7d8 00:22:43.244 [2024-11-20 22:42:43.777133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.777162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.785814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eea00 00:22:43.244 [2024-11-20 22:42:43.786430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.786460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.795047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190de470 00:22:43.244 [2024-11-20 22:42:43.795597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.795631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.804288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e99d8 00:22:43.244 [2024-11-20 22:42:43.804860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.804893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.813612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e9168 00:22:43.244 [2024-11-20 22:42:43.814218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.814248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.822791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f57b0 00:22:43.244 [2024-11-20 22:42:43.823519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.823556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.831854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ed0b0 00:22:43.244 [2024-11-20 22:42:43.832869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.832904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.841831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fb048 00:22:43.244 [2024-11-20 22:42:43.842450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.842480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.851113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fa7d8 00:22:43.244 [2024-11-20 22:42:43.851852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.851887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.859191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f5be8 00:22:43.244 [2024-11-20 22:42:43.859525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.859553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.870189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f20d8 00:22:43.244 [2024-11-20 22:42:43.871581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.871615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.879612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e0630 00:22:43.244 [2024-11-20 22:42:43.880949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.880982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.889160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e1f80 00:22:43.244 [2024-11-20 22:42:43.890888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.890923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.898215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e1f80 00:22:43.244 [2024-11-20 22:42:43.898857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.898899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.907364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ddc00 00:22:43.244 [2024-11-20 22:42:43.908324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.908357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.917889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fd208 00:22:43.244 [2024-11-20 22:42:43.918984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.919017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.926674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e9e10 00:22:43.244 [2024-11-20 22:42:43.927581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.927616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.936780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6b70 00:22:43.244 [2024-11-20 22:42:43.937726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.937761] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.947106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ef6a8 00:22:43.244 [2024-11-20 22:42:43.948228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.948262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.957928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f7da8 00:22:43.244 [2024-11-20 22:42:43.958603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.958636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:43.244 [2024-11-20 22:42:43.967869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f81e0 00:22:43.244 [2024-11-20 22:42:43.969374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.244 [2024-11-20 22:42:43.969407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:43.976400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eff18 00:22:43.504 [2024-11-20 22:42:43.976548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:43.976569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:43.986331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fe720 00:22:43.504 [2024-11-20 22:42:43.987016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:43.987051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:43.995692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e99d8 00:22:43.504 [2024-11-20 22:42:43.996452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:43.996488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.004990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f7da8 00:22:43.504 [2024-11-20 22:42:44.005706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.005740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.015961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e7c50 00:22:43.504 [2024-11-20 22:42:44.016713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.016747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.024851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e3d08 00:22:43.504 [2024-11-20 22:42:44.025976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.026021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.034478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ef6a8 00:22:43.504 [2024-11-20 22:42:44.034947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.034976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.045218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190de038 00:22:43.504 [2024-11-20 22:42:44.046245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.046286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.054038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f0788 00:22:43.504 [2024-11-20 22:42:44.055369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.055402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.063646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6fa8 00:22:43.504 [2024-11-20 22:42:44.064363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.064393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.073061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fe720 00:22:43.504 [2024-11-20 22:42:44.073974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 
22:42:44.074027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.083505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ea680 00:22:43.504 [2024-11-20 22:42:44.084506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.084539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.090459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fcdd0 00:22:43.504 [2024-11-20 22:42:44.090553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.090575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.100213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f5378 00:22:43.504 [2024-11-20 22:42:44.100998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.101033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.110605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f7100 00:22:43.504 [2024-11-20 22:42:44.111145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.111174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.120024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ed920 00:22:43.504 [2024-11-20 22:42:44.120751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.504 [2024-11-20 22:42:44.120780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:43.504 [2024-11-20 22:42:44.129300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fdeb0 00:22:43.504 [2024-11-20 22:42:44.130513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.130549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.139350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eb760 00:22:43.505 [2024-11-20 22:42:44.140872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:43.505 [2024-11-20 22:42:44.140906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.149995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f92c0 00:22:43.505 [2024-11-20 22:42:44.150919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.150956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.159442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f0bc0 00:22:43.505 [2024-11-20 22:42:44.160601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.160644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.168929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ea248 00:22:43.505 [2024-11-20 22:42:44.170204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.170238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.179192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f4298 00:22:43.505 [2024-11-20 22:42:44.179995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.180029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.187409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e8d30 00:22:43.505 [2024-11-20 22:42:44.188353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.188398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.197674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f0bc0 00:22:43.505 [2024-11-20 22:42:44.198868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.198901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.207321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e9168 00:22:43.505 [2024-11-20 22:42:44.208012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1385 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.208044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.218074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ea680 00:22:43.505 [2024-11-20 22:42:44.219244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.219289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.224846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ec408 00:22:43.505 [2024-11-20 22:42:44.225652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.505 [2024-11-20 22:42:44.225687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.505 [2024-11-20 22:42:44.234814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f35f0 00:22:43.764 [2024-11-20 22:42:44.234943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.764 [2024-11-20 22:42:44.234964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 22:42:44.244811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e3d08 00:22:43.764 [2024-11-20 22:42:44.245483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.764 [2024-11-20 22:42:44.245518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 22:42:44.254359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fef90 00:22:43.764 [2024-11-20 22:42:44.255029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.764 [2024-11-20 22:42:44.255063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 22:42:44.264566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e1b48 00:22:43.764 [2024-11-20 22:42:44.265854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.764 [2024-11-20 22:42:44.265888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 22:42:44.273890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e1f80 00:22:43.764 [2024-11-20 22:42:44.274469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19435 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.764 [2024-11-20 22:42:44.274497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 22:42:44.283272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e6300 00:22:43.764 [2024-11-20 22:42:44.284094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.764 [2024-11-20 22:42:44.284127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 22:42:44.292562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e84c0 00:22:43.764 [2024-11-20 22:42:44.293125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.764 [2024-11-20 22:42:44.293155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 22:42:44.302222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e0a68 00:22:43.764 [2024-11-20 22:42:44.302777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.764 [2024-11-20 22:42:44.302807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:43.764 [2024-11-20 22:42:44.311638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ddc00 00:22:43.765 [2024-11-20 22:42:44.312179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.312209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.323511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f1868 00:22:43.765 [2024-11-20 22:42:44.325078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.325113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.334689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f1430 00:22:43.765 [2024-11-20 22:42:44.335889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.335922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.342219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f3e60 00:22:43.765 [2024-11-20 22:42:44.342965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:17989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.343000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.352463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190fc560 00:22:43.765 [2024-11-20 22:42:44.353627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.353660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.361155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ef6a8 00:22:43.765 [2024-11-20 22:42:44.361580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.361611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.371601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190eff18 00:22:43.765 [2024-11-20 22:42:44.372138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.372167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.381730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f96f8 00:22:43.765 [2024-11-20 22:42:44.383443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.383476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.391135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190de8a8 00:22:43.765 [2024-11-20 22:42:44.391903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.391937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.399467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e0630 00:22:43.765 [2024-11-20 22:42:44.399737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.399767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.410188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e9e10 00:22:43.765 [2024-11-20 22:42:44.411374] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.411408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.420717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f0ff8 00:22:43.765 [2024-11-20 22:42:44.422425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.422459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.430019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e3060 00:22:43.765 [2024-11-20 22:42:44.431727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.431760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.439588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f0bc0 00:22:43.765 [2024-11-20 22:42:44.441008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.441041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.448100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f8e88 00:22:43.765 [2024-11-20 22:42:44.449144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.449177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.457091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190e9e10 00:22:43.765 [2024-11-20 22:42:44.458326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.458357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.466753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190f9f68 00:22:43.765 [2024-11-20 22:42:44.467012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.467059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:43.765 [2024-11-20 22:42:44.476342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2a00) with pdu=0x2000190ea248 00:22:43.765 [2024-11-20 22:42:44.476738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.765 [2024-11-20 22:42:44.476769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:43.765 00:22:43.765 Latency(us) 00:22:43.765 [2024-11-20T22:42:44.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.765 [2024-11-20T22:42:44.499Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:43.765 nvme0n1 : 2.00 26710.49 104.34 0.00 0.00 4787.16 1966.08 12034.79 00:22:43.765 [2024-11-20T22:42:44.499Z] =================================================================================================================== 00:22:43.765 [2024-11-20T22:42:44.499Z] Total : 26710.49 104.34 0.00 0.00 4787.16 1966.08 12034.79 00:22:43.765 0 00:22:44.024 22:42:44 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:44.024 22:42:44 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:44.024 22:42:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:44.024 22:42:44 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:44.024 | .driver_specific 00:22:44.024 | .nvme_error 00:22:44.024 | .status_code 00:22:44.024 | .command_transient_transport_error' 00:22:44.283 22:42:44 -- host/digest.sh@71 -- # (( 209 > 0 )) 00:22:44.283 22:42:44 -- host/digest.sh@73 -- # killprocess 97727 00:22:44.283 22:42:44 -- common/autotest_common.sh@936 -- # '[' -z 97727 ']' 00:22:44.283 22:42:44 -- common/autotest_common.sh@940 -- # kill -0 97727 00:22:44.283 22:42:44 -- common/autotest_common.sh@941 -- # uname 00:22:44.283 22:42:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:44.283 22:42:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97727 00:22:44.283 22:42:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:44.283 killing process with pid 97727 00:22:44.283 22:42:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:44.283 22:42:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97727' 00:22:44.283 Received shutdown signal, test time was about 2.000000 seconds 00:22:44.283 00:22:44.283 Latency(us) 00:22:44.283 [2024-11-20T22:42:45.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.283 [2024-11-20T22:42:45.017Z] =================================================================================================================== 00:22:44.283 [2024-11-20T22:42:45.017Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.283 22:42:44 -- common/autotest_common.sh@955 -- # kill 97727 00:22:44.283 22:42:44 -- common/autotest_common.sh@960 -- # wait 97727 00:22:44.542 22:42:45 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:44.542 22:42:45 -- host/digest.sh@54 -- # local rw bs qd 00:22:44.542 22:42:45 -- host/digest.sh@56 -- # rw=randwrite 00:22:44.542 22:42:45 -- host/digest.sh@56 -- # bs=131072 00:22:44.542 22:42:45 -- host/digest.sh@56 -- # qd=16 00:22:44.542 22:42:45 -- host/digest.sh@58 -- # bperfpid=97823 00:22:44.542 22:42:45 -- host/digest.sh@60 -- # waitforlisten 97823 /var/tmp/bperf.sock 00:22:44.542 22:42:45 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:44.542 22:42:45 -- 
common/autotest_common.sh@829 -- # '[' -z 97823 ']' 00:22:44.542 22:42:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:44.542 22:42:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:44.542 22:42:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:44.542 22:42:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.542 22:42:45 -- common/autotest_common.sh@10 -- # set +x 00:22:44.542 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:44.542 Zero copy mechanism will not be used. 00:22:44.542 [2024-11-20 22:42:45.136135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:44.542 [2024-11-20 22:42:45.136222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97823 ] 00:22:44.542 [2024-11-20 22:42:45.267021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.801 [2024-11-20 22:42:45.328118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.369 22:42:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.369 22:42:46 -- common/autotest_common.sh@862 -- # return 0 00:22:45.369 22:42:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:45.369 22:42:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:45.627 22:42:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:45.627 22:42:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.627 22:42:46 -- common/autotest_common.sh@10 -- # set +x 00:22:45.627 22:42:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.627 22:42:46 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.627 22:42:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.886 nvme0n1 00:22:46.146 22:42:46 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:46.146 22:42:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.146 22:42:46 -- common/autotest_common.sh@10 -- # set +x 00:22:46.146 22:42:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.146 22:42:46 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:46.146 22:42:46 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:46.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:46.146 Zero copy mechanism will not be used. 00:22:46.146 Running I/O for 2 seconds... 
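For reference, the xtrace above reduces to the short sequence below. This is a minimal sketch reconstructed from the trace, not the harness itself: the repo path, the bdevperf RPC socket (/var/tmp/bperf.sock), the subsystem NQN and the bdev name (nvme0 / nvme0n1) are copied from the trace as-is, and which process each rpc_cmd call addresses is inferred rather than confirmed.

  # initiator (bdevperf) side: keep per-bdev NVMe error counters and retry
  # indefinitely, so injected digest errors are counted instead of failing I/O
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the TCP controller with data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the result of 32 accel crc32c operations (issued via rpc_cmd in the
  # trace; assumed to reach that app's default RPC socket) so data digests mismatch
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # drive the 2-second randwrite workload bdevperf was started with (-w randwrite -o 131072 -t 2 -q 16)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # afterwards, count commands that completed with TRANSIENT TRANSPORT ERROR
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

As in the first pass (where this count came back as 209), host/digest.sh only asserts that the resulting error count is greater than zero.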
00:22:46.146 [2024-11-20 22:42:46.740814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.741196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.741255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.745375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.745697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.745736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.749738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.749931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.749955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.754500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.754671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.754695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.758868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.758987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.759020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.763451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.763549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.763583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.767864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.768022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.768045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.772353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.772549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.772572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.776620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.776782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.776805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.780944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.781080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.781114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.146 [2024-11-20 22:42:46.785590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.146 [2024-11-20 22:42:46.785740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.146 [2024-11-20 22:42:46.785763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.789948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.790073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.790096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.794273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.794412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.794444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.798728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.798869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.798893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.803208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.803371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.803396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.807822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.808007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.808030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.812133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.812320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.812345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.816473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.816600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.816624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.820877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.821026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.821050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.825208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.825337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.825360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.829563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.829671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.829695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.834050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.834213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.834249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.838515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.838802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.838831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.842832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.843032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.843056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.847217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.847438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.847469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.851573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.851791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.851814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.855959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.856076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.856099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.860270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.860402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.860426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.864569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.864669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.864692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.869150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.869320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.869344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.147 [2024-11-20 22:42:46.873738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.147 [2024-11-20 22:42:46.873995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.147 [2024-11-20 22:42:46.874019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.878827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.879026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.879049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.883487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.883677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.883700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.887775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.887884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.887908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.892549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.892706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 
22:42:46.892730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.896908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.897033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.897058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.901164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.901317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.901341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.905581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.905712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.905735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.909866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.910061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.910084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.914252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.914449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.914473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.918483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.918688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.918711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.922684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.922821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:46.408 [2024-11-20 22:42:46.922844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.926892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.926983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.927006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.931233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.931384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.931408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.935431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.935530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.935553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.939827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.939974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.939998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.943947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.944186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.944220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.948228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.408 [2024-11-20 22:42:46.948434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.408 [2024-11-20 22:42:46.948457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.408 [2024-11-20 22:42:46.952429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.952685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.952713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.956624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.956813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.956835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.960844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.960963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.960986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.965056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.965172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.965195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.969302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.969434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.969456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.973533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.973676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.973700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.977729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.978008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.978042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.982027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.982222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.982245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.986598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.986795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.986818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.991025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.991227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.991250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:46.995867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:46.996043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:46.996064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.000756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.000862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.000885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.005476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.005580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.005605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.010387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.010566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.010589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.014944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.015145] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.015167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.019714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.019891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.019915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.024266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.024484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.024506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.028694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.028784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.028807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.033241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.033405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.033429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.037571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.037749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.037772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.041939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.042056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.042079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.046270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.046443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.046467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.050466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.050698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.050721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.054779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.054981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.055004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.059033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.059176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.059198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.063256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.063407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.063429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.067458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.409 [2024-11-20 22:42:47.067617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.409 [2024-11-20 22:42:47.067640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.409 [2024-11-20 22:42:47.071670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.071847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.071870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.075917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 
22:42:47.076068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.076090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.080214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.080390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.080426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.084531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.084817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.084860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.088939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.089074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.089096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.093180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.093382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.093405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.097384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.097472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.097495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.101632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.101772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.101804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.105950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 
00:22:46.410 [2024-11-20 22:42:47.106105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.106133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.110183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.110353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.110376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.114496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.114668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.114692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.118710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.118922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.118946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.122907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.123088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.123111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.127077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.127204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.127227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.131191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.131288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.131311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.410 [2024-11-20 22:42:47.135710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.410 [2024-11-20 22:42:47.135892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.410 [2024-11-20 22:42:47.135916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.140328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.140441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.140463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.144679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.144821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.144844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.149002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.149161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.149184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.153262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.153477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.153520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.157550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.157645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.157667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.161957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.162077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.162099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.166256] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.166379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.166402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.170522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.170682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.170704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.174734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.174848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.174871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.178893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.178996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.179018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.183213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.183380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.183404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.187405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.187665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.187700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.191586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.191666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.191689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.195742] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.195909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.195932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.199968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.200129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.200152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.204161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.204342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.204365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.208378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.208503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.208526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.212516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.212654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.212677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.216787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.216941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.216965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.221051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.221248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.221272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.671 
[2024-11-20 22:42:47.225384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.225634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.225664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.229700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.229872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.229894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.234200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.234354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.234377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.238562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.238702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.238725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.242806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.671 [2024-11-20 22:42:47.242923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.671 [2024-11-20 22:42:47.242947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.671 [2024-11-20 22:42:47.247077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.247195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.247218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.251266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.251450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.251473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.255596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.255768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.255791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.259824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.259969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.259992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.264138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.264259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.264299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.268307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.268440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.268463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.272716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.272866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.272889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.277082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.277189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.277212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.281391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.281516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.281540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.285609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.285743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.285765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.289896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.289998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.290021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.294253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.294400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.294423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.298571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.298679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.298702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.303064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.303200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.303234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.307527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.307686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.307709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.311831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.311944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.311967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.316002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.316159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.316182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.320357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.320497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.320520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.324580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.324668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.324691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.328832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.328962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.328984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.333151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.333253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.333291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.337377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.337457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.337481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.341716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.341857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.341880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.346118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.346338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.346361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.350538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.350677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.350700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.354865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.355015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.355037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.359184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.359351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.672 [2024-11-20 22:42:47.359374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.672 [2024-11-20 22:42:47.363512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.672 [2024-11-20 22:42:47.363685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 22:42:47.363708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.673 [2024-11-20 22:42:47.367794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.673 [2024-11-20 22:42:47.367932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 22:42:47.367955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.673 [2024-11-20 22:42:47.372086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.673 [2024-11-20 22:42:47.372233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 
22:42:47.372255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.673 [2024-11-20 22:42:47.376324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.673 [2024-11-20 22:42:47.376458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 22:42:47.376481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.673 [2024-11-20 22:42:47.380519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.673 [2024-11-20 22:42:47.380655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 22:42:47.380679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.673 [2024-11-20 22:42:47.384932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.673 [2024-11-20 22:42:47.385077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 22:42:47.385101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.673 [2024-11-20 22:42:47.389298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.673 [2024-11-20 22:42:47.389435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 22:42:47.389459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.673 [2024-11-20 22:42:47.393595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.673 [2024-11-20 22:42:47.393718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 22:42:47.393741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.673 [2024-11-20 22:42:47.398245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.673 [2024-11-20 22:42:47.398419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.673 [2024-11-20 22:42:47.398442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.933 [2024-11-20 22:42:47.403003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.933 [2024-11-20 22:42:47.403089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:46.933 [2024-11-20 22:42:47.403111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.933 [2024-11-20 22:42:47.407392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.933 [2024-11-20 22:42:47.407545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.933 [2024-11-20 22:42:47.407568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.933 [2024-11-20 22:42:47.411758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.933 [2024-11-20 22:42:47.411892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.933 [2024-11-20 22:42:47.411915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.933 [2024-11-20 22:42:47.416033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.933 [2024-11-20 22:42:47.416135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.933 [2024-11-20 22:42:47.416157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.933 [2024-11-20 22:42:47.420311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.933 [2024-11-20 22:42:47.420421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.933 [2024-11-20 22:42:47.420443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.933 [2024-11-20 22:42:47.424623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.933 [2024-11-20 22:42:47.424754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.933 [2024-11-20 22:42:47.424778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.933 [2024-11-20 22:42:47.428920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.933 [2024-11-20 22:42:47.429046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.933 [2024-11-20 22:42:47.429068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.433234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.433424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.433447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.437494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.437630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.437653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.441688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.441818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.441853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.446039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.446183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.446206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.450385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.450484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.450507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.454690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.454850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.454873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.458949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.459115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.459138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.463214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.463367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.463391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.467528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.467681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.467703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.471824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.471950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.471973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.476047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.476212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.476234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.480297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.480469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.480492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.484566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.484658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.484680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.488832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.488940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.488963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.493108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.493238] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.493261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.497418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.497555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.497578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.501736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.501923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.501946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.506060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.506215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.506246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.510399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.510513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.510536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.514627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.514773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.514796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.518987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.519086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.519108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.523174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.523333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.523356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.527449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.527598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.527622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.531839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.531961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.531984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.536112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.536243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.536265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.540439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.540545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.540568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.544696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.934 [2024-11-20 22:42:47.544839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.934 [2024-11-20 22:42:47.544861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.934 [2024-11-20 22:42:47.548953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.549084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.549107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.553198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 
22:42:47.553381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.553404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.557404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.557550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.557573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.561677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.561840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.561863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.566054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.566198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.566220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.570393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.570546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.570569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.574650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.574820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.574843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.578832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.578941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.578964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.583117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with 
pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.583248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.583271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.587418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.587590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.587612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.591782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.591947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.591970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.596030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.596138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.596160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.600310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.600418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.600441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.604651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.604791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.604815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.608929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.609022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.609045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.613209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.613360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.613384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.617503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.617692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.617716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.621697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.621786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.621830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.625959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.626094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.626117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.630321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.630443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.630466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.634719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.634902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.634925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.639009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.639149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.639172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.643429] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.643559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.643582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.647756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.647864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.647886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.652087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.652250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.652274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.656517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.656661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.656696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.935 [2024-11-20 22:42:47.661013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:46.935 [2024-11-20 22:42:47.661181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.935 [2024-11-20 22:42:47.661205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.665752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.665898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.665920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.670305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.670467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.670490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.674707] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.674856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.674879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.679082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.679232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.679255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.683374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.683483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.683505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.687743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.687874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.687898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.691925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.692088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.692110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.696293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.696481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.696505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.700538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.700698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.700721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.196 
[2024-11-20 22:42:47.704804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.704975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.704999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.709048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.709186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.709210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.713510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.713615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.713638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.717749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.717923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.717946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.196 [2024-11-20 22:42:47.722070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.196 [2024-11-20 22:42:47.722288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-11-20 22:42:47.722311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.726483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.726584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.726606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.730871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.731015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.731038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.735217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.735386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.735422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.739520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.739632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.739655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.743877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.744011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.744034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.748287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.748399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.748422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.752750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.752880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.752903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.756986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.757125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.757147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.761223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.761344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.761367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.765575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.765740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.765763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.769864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.770006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.770028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.774328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.774463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.774486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.778618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.778775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.778797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.783018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.783141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.783164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.787251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.787373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.787396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.791539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.791686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.791709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.795798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.795900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.795924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.800048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.800185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.800208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.804398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.804497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.804520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.808875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.809004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.809028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.813237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.813385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.813408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.817525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.817677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.817699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.821857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.822010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.822033] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.826123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.826333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.826358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.830459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.830539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.830562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.834783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.834941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.834980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.839025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.197 [2024-11-20 22:42:47.839118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-20 22:42:47.839141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.197 [2024-11-20 22:42:47.843414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.843561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.843585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.847719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.847853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.847876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.852074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.852158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.852181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.856344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.856493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.856515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.860584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.860775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.860798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.864916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.865033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.865055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.869131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.869304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.869328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.873478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.873593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.873615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.877686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.877841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.877865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.882005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.882163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 
22:42:47.882185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.886508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.886640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.886663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.890803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.890970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.890993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.895268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.895421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.895445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.899954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.900128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.900154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.904634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.904793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.904816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.909375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.909485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.909508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.914043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.914234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:47.198 [2024-11-20 22:42:47.914257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.918850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.918997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.919021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.198 [2024-11-20 22:42:47.923872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.198 [2024-11-20 22:42:47.923974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.198 [2024-11-20 22:42:47.923997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.458 [2024-11-20 22:42:47.928898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.458 [2024-11-20 22:42:47.929081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.458 [2024-11-20 22:42:47.929105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.458 [2024-11-20 22:42:47.933825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.933971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.933994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.938226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.938372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.938395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.942899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.943078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.943102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.947269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.947434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.947458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.951555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.951685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.951709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.955903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.956059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.956082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.960261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.960386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.960408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.964763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.964914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.964936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.969101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.969262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.969301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.973558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.973672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.973695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.978029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.978190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.978213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.982656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.982772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.982795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.987026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.987203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.987226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.991419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.991568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.991592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:47.995668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:47.995790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:47.995813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.000047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.000220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.000242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.004757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.004874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.004897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.009184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.009314] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.009338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.014037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.014200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.014249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.018815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.019007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.019047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.023881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.024069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.024098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.028800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.028949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.028972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.033449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.033576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.033600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.038280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.038463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.038486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.042670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.042773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.042796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.047413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.047568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.047592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.051845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.459 [2024-11-20 22:42:48.051985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.459 [2024-11-20 22:42:48.052009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.459 [2024-11-20 22:42:48.056295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.056447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.056470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.060603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.060775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.060797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.065184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.065363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.065386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.069628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.069790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.069833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.074195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 
22:42:48.074378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.074402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.078667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.078783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.078806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.083400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.083538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.083561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.087825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.087972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.088003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.092268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.092419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.092442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.096607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.096784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.096807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.101098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.101252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.101288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.105622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 
00:22:47.460 [2024-11-20 22:42:48.105760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.105783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.110141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.110329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.110352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.114670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.114807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.114829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.119097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.119251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.119287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.123748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.123877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.123900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.128117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.128240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.128263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.132533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.132688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.132710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.136875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.136972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.137001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.141216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.141374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.141398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.145584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.145743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.145765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.149895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.150049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.150073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.154461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.154624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.154648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.158796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.158919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.158941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.163295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.163431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.163454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.167578] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.167776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.167798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.171919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.460 [2024-11-20 22:42:48.172005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.460 [2024-11-20 22:42:48.172028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.460 [2024-11-20 22:42:48.176201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.461 [2024-11-20 22:42:48.176352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.461 [2024-11-20 22:42:48.176376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.461 [2024-11-20 22:42:48.180491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.461 [2024-11-20 22:42:48.180602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.461 [2024-11-20 22:42:48.180625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.461 [2024-11-20 22:42:48.184887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.461 [2024-11-20 22:42:48.185029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.461 [2024-11-20 22:42:48.185051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.189742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.189981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.190004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.194250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.194394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.194428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.198809] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.198944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.198967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.203133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.203296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.203319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.207382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.207522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.207545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.211705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.211877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.211900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.216053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.216177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.216200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.220270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.220491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.220513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.224634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.224797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.224819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.721 
[2024-11-20 22:42:48.228893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.228995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.229019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.233118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.233249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.233272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.237465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.237602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.237625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.241657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.241785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.721 [2024-11-20 22:42:48.241830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.721 [2024-11-20 22:42:48.245981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.721 [2024-11-20 22:42:48.246121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.246154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.250260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.250405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.250435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.254552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.254681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.254703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.258921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.259073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.259096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.263115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.263252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.263288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.267451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.267565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.267587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.271756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.271893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.271915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.276058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.276173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.276196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.280377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.280525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.280547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.284624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.284773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.284796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.288808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.289002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.289025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.293039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.293208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.293230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.297329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.297456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.297478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.301607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.301767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.301789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.305998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.306130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.306153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.310288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.310421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.310444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.314519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.314652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.314675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.318861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.318969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.318991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.323157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.323269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.323311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.327414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.327623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.327646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.331672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.331766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.331788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.335950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.336124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.336147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.340351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.340463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.340486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.344600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.344757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.344780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.349034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.349189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.349211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.353389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.353495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.353519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.357713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.357843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.722 [2024-11-20 22:42:48.357866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.722 [2024-11-20 22:42:48.362213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.722 [2024-11-20 22:42:48.362363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.362387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.366482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.366578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.366601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.370852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.371031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.371053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.375103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.375225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 
22:42:48.375248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.379517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.379618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.379641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.383828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.384046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.384069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.388231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.388359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.388382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.392486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.392600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.392622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.396757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.396891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.396914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.401017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.401131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.401153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.405215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.405371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:47.723 [2024-11-20 22:42:48.405394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.409439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.409574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.409596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.413603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.413830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.413858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.418015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.418193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.418217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.422363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.422453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.422476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.426583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.426695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.426718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.430840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.430976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.430998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.435071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.435224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.435246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.439365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.439501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.439524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.443611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.443779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.443802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.723 [2024-11-20 22:42:48.448200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.723 [2024-11-20 22:42:48.448300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.723 [2024-11-20 22:42:48.448323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.453055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.453207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.453229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.457570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.457682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.457704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.462169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.462391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.462413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.466517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.466668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.466692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.470784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.470904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.470926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.475120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.475324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.475347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.479419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.479525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.479548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.483726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.483888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.483910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.488007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.488149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.488171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.492330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.492483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.492506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.496691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.496855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.496878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.500932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.501047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.501077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.505178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.505289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.505312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.509619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.984 [2024-11-20 22:42:48.509832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-20 22:42:48.509862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.984 [2024-11-20 22:42:48.513903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.514005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.514027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.518269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.518448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.518472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.522537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.522672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.522695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.526800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.526904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.526927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.531099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.531301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.531324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.535444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.535561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.535584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.539710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.539810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.539832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.543956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.544101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.544124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.548218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.548332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.548356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.552513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.552697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.552719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.556867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 
22:42:48.557018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.557040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.561158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.561308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.561331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.565475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.565661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.565693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.569967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.570068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.570090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.574272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.574426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.574448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.578516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.578671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.578694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.582865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.582972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.582994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.587163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with 
pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.587306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.587329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.591482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.591606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.591630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.595648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.595834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.595856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.599946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.600092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.600115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.604316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.604410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.604433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.608578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.608702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.608724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.612819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.612954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.612978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.617035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.617122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.617145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.621256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.621422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.621445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.625485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.985 [2024-11-20 22:42:48.625636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.985 [2024-11-20 22:42:48.625658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.985 [2024-11-20 22:42:48.629722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.629830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.629853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.633927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.634072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.634095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.638317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.638481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.638504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.642567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.642733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.642756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.646916] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.647057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.647080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.651141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.651318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.651341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.655512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.655659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.655682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.659780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.659915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.659937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.664002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.664127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.664150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.668259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.668411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.668434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.672527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.672642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.672664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.676849] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.676965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.676988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.681208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.681366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.681389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.685421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.685586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.685608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.689603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.689769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.689791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.693879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.694019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.694041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.698089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.698219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.698241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.702449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.702635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.702669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.986 
[2024-11-20 22:42:48.706712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.706824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.706846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.986 [2024-11-20 22:42:48.711099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:47.986 [2024-11-20 22:42:48.711222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.986 [2024-11-20 22:42:48.711245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.246 [2024-11-20 22:42:48.715792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:48.246 [2024-11-20 22:42:48.715951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.246 [2024-11-20 22:42:48.715974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.246 [2024-11-20 22:42:48.720195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:48.246 [2024-11-20 22:42:48.720351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.246 [2024-11-20 22:42:48.720373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.246 [2024-11-20 22:42:48.724620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:48.246 [2024-11-20 22:42:48.724798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.246 [2024-11-20 22:42:48.724821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.246 [2024-11-20 22:42:48.728886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22f2ba0) with pdu=0x2000190fef90 00:22:48.246 [2024-11-20 22:42:48.729019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.246 [2024-11-20 22:42:48.729042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.246 00:22:48.246 Latency(us) 00:22:48.246 [2024-11-20T22:42:48.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.246 [2024-11-20T22:42:48.980Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:48.246 nvme0n1 : 2.00 7089.18 886.15 0.00 0.00 2251.79 1757.56 8638.84 00:22:48.246 [2024-11-20T22:42:48.980Z] 
=================================================================================================================== 00:22:48.246 [2024-11-20T22:42:48.980Z] Total : 7089.18 886.15 0.00 0.00 2251.79 1757.56 8638.84 00:22:48.246 0 00:22:48.246 22:42:48 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:48.246 22:42:48 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:48.246 22:42:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:48.246 22:42:48 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:48.246 | .driver_specific 00:22:48.246 | .nvme_error 00:22:48.246 | .status_code 00:22:48.246 | .command_transient_transport_error' 00:22:48.505 22:42:49 -- host/digest.sh@71 -- # (( 457 > 0 )) 00:22:48.505 22:42:49 -- host/digest.sh@73 -- # killprocess 97823 00:22:48.505 22:42:49 -- common/autotest_common.sh@936 -- # '[' -z 97823 ']' 00:22:48.505 22:42:49 -- common/autotest_common.sh@940 -- # kill -0 97823 00:22:48.505 22:42:49 -- common/autotest_common.sh@941 -- # uname 00:22:48.505 22:42:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.505 22:42:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97823 00:22:48.505 22:42:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:48.505 22:42:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:48.505 22:42:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97823' 00:22:48.505 killing process with pid 97823 00:22:48.505 Received shutdown signal, test time was about 2.000000 seconds 00:22:48.505 00:22:48.505 Latency(us) 00:22:48.505 [2024-11-20T22:42:49.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.505 [2024-11-20T22:42:49.239Z] =================================================================================================================== 00:22:48.505 [2024-11-20T22:42:49.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.505 22:42:49 -- common/autotest_common.sh@955 -- # kill 97823 00:22:48.505 22:42:49 -- common/autotest_common.sh@960 -- # wait 97823 00:22:48.763 22:42:49 -- host/digest.sh@115 -- # killprocess 97527 00:22:48.763 22:42:49 -- common/autotest_common.sh@936 -- # '[' -z 97527 ']' 00:22:48.763 22:42:49 -- common/autotest_common.sh@940 -- # kill -0 97527 00:22:48.763 22:42:49 -- common/autotest_common.sh@941 -- # uname 00:22:48.763 22:42:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.763 22:42:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97527 00:22:48.763 22:42:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:48.763 killing process with pid 97527 00:22:48.764 22:42:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:48.764 22:42:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97527' 00:22:48.764 22:42:49 -- common/autotest_common.sh@955 -- # kill 97527 00:22:48.764 22:42:49 -- common/autotest_common.sh@960 -- # wait 97527 00:22:49.022 00:22:49.022 real 0m17.623s 00:22:49.022 user 0m32.652s 00:22:49.022 sys 0m5.554s 00:22:49.022 22:42:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:49.022 22:42:49 -- common/autotest_common.sh@10 -- # set +x 00:22:49.022 ************************************ 00:22:49.022 END TEST nvmf_digest_error 00:22:49.022 ************************************ 00:22:49.022 22:42:49 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:49.022 22:42:49 
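For reference, the transient-error check traced above boils down to reading the per-bdev NVMe error counters over the bdevperf RPC socket and requiring that at least one COMMAND TRANSIENT TRANSPORT ERROR completion was counted. A minimal stand-alone sketch of that check, with the rpc.py path, socket and bdev name taken from this run (adjust them for another environment):

  #!/usr/bin/env bash
  # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for a bdev
  # and fail unless at least one was seen (mirrors get_transient_errcount above).
  set -euo pipefail

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path used in this run
  SOCK=/var/tmp/bperf.sock                          # bdevperf RPC socket
  BDEV=nvme0n1

  count=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  echo "transient transport errors on $BDEV: $count"
  (( count > 0 ))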
-- host/digest.sh@139 -- # nvmftestfini 00:22:49.022 22:42:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:49.022 22:42:49 -- nvmf/common.sh@116 -- # sync 00:22:49.022 22:42:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:49.022 22:42:49 -- nvmf/common.sh@119 -- # set +e 00:22:49.022 22:42:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:49.022 22:42:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:49.022 rmmod nvme_tcp 00:22:49.022 rmmod nvme_fabrics 00:22:49.022 rmmod nvme_keyring 00:22:49.022 22:42:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:49.022 22:42:49 -- nvmf/common.sh@123 -- # set -e 00:22:49.022 22:42:49 -- nvmf/common.sh@124 -- # return 0 00:22:49.022 22:42:49 -- nvmf/common.sh@477 -- # '[' -n 97527 ']' 00:22:49.022 22:42:49 -- nvmf/common.sh@478 -- # killprocess 97527 00:22:49.022 22:42:49 -- common/autotest_common.sh@936 -- # '[' -z 97527 ']' 00:22:49.022 Process with pid 97527 is not found 00:22:49.022 22:42:49 -- common/autotest_common.sh@940 -- # kill -0 97527 00:22:49.022 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97527) - No such process 00:22:49.022 22:42:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97527 is not found' 00:22:49.022 22:42:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:49.022 22:42:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:49.022 22:42:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:49.022 22:42:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.022 22:42:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:49.022 22:42:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.022 22:42:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.022 22:42:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.022 22:42:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:49.022 00:22:49.022 real 0m35.640s 00:22:49.022 user 1m3.815s 00:22:49.022 sys 0m11.351s 00:22:49.022 22:42:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:49.022 ************************************ 00:22:49.022 END TEST nvmf_digest 00:22:49.022 ************************************ 00:22:49.022 22:42:49 -- common/autotest_common.sh@10 -- # set +x 00:22:49.282 22:42:49 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:22:49.282 22:42:49 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:22:49.282 22:42:49 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:49.282 22:42:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:49.282 22:42:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:49.282 22:42:49 -- common/autotest_common.sh@10 -- # set +x 00:22:49.282 ************************************ 00:22:49.282 START TEST nvmf_mdns_discovery 00:22:49.282 ************************************ 00:22:49.282 22:42:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:49.282 * Looking for test storage... 
00:22:49.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:49.282 22:42:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:49.282 22:42:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:49.282 22:42:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:49.282 22:42:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:49.282 22:42:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:49.282 22:42:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:49.282 22:42:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:49.282 22:42:49 -- scripts/common.sh@335 -- # IFS=.-: 00:22:49.282 22:42:49 -- scripts/common.sh@335 -- # read -ra ver1 00:22:49.282 22:42:49 -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.282 22:42:49 -- scripts/common.sh@336 -- # read -ra ver2 00:22:49.282 22:42:49 -- scripts/common.sh@337 -- # local 'op=<' 00:22:49.282 22:42:49 -- scripts/common.sh@339 -- # ver1_l=2 00:22:49.282 22:42:49 -- scripts/common.sh@340 -- # ver2_l=1 00:22:49.282 22:42:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:49.282 22:42:49 -- scripts/common.sh@343 -- # case "$op" in 00:22:49.282 22:42:49 -- scripts/common.sh@344 -- # : 1 00:22:49.282 22:42:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:49.282 22:42:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.282 22:42:49 -- scripts/common.sh@364 -- # decimal 1 00:22:49.282 22:42:49 -- scripts/common.sh@352 -- # local d=1 00:22:49.282 22:42:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.282 22:42:49 -- scripts/common.sh@354 -- # echo 1 00:22:49.282 22:42:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:49.282 22:42:49 -- scripts/common.sh@365 -- # decimal 2 00:22:49.282 22:42:49 -- scripts/common.sh@352 -- # local d=2 00:22:49.282 22:42:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.282 22:42:49 -- scripts/common.sh@354 -- # echo 2 00:22:49.282 22:42:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:49.282 22:42:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:49.282 22:42:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:49.282 22:42:49 -- scripts/common.sh@367 -- # return 0 00:22:49.282 22:42:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.282 22:42:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:49.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.282 --rc genhtml_branch_coverage=1 00:22:49.282 --rc genhtml_function_coverage=1 00:22:49.282 --rc genhtml_legend=1 00:22:49.282 --rc geninfo_all_blocks=1 00:22:49.282 --rc geninfo_unexecuted_blocks=1 00:22:49.282 00:22:49.282 ' 00:22:49.282 22:42:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:49.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.282 --rc genhtml_branch_coverage=1 00:22:49.282 --rc genhtml_function_coverage=1 00:22:49.282 --rc genhtml_legend=1 00:22:49.282 --rc geninfo_all_blocks=1 00:22:49.282 --rc geninfo_unexecuted_blocks=1 00:22:49.282 00:22:49.282 ' 00:22:49.282 22:42:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:49.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.282 --rc genhtml_branch_coverage=1 00:22:49.282 --rc genhtml_function_coverage=1 00:22:49.282 --rc genhtml_legend=1 00:22:49.282 --rc geninfo_all_blocks=1 00:22:49.282 --rc geninfo_unexecuted_blocks=1 00:22:49.282 00:22:49.282 ' 00:22:49.282 
22:42:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:49.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.282 --rc genhtml_branch_coverage=1 00:22:49.282 --rc genhtml_function_coverage=1 00:22:49.282 --rc genhtml_legend=1 00:22:49.282 --rc geninfo_all_blocks=1 00:22:49.282 --rc geninfo_unexecuted_blocks=1 00:22:49.282 00:22:49.282 ' 00:22:49.282 22:42:49 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:49.282 22:42:49 -- nvmf/common.sh@7 -- # uname -s 00:22:49.282 22:42:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.282 22:42:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.282 22:42:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.282 22:42:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.282 22:42:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.282 22:42:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.282 22:42:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.282 22:42:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.282 22:42:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.282 22:42:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.282 22:42:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:22:49.282 22:42:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:22:49.282 22:42:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.282 22:42:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.282 22:42:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:49.282 22:42:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:49.282 22:42:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.282 22:42:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.282 22:42:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.282 22:42:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.282 22:42:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.282 22:42:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.282 22:42:49 -- paths/export.sh@5 -- # export PATH 00:22:49.283 22:42:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.283 22:42:49 -- nvmf/common.sh@46 -- # : 0 00:22:49.283 22:42:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:49.283 22:42:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:49.283 22:42:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:49.283 22:42:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.283 22:42:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.283 22:42:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:49.283 22:42:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:49.283 22:42:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:49.283 22:42:49 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:49.283 22:42:49 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:49.283 22:42:49 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:49.283 22:42:49 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:49.283 22:42:49 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:49.283 22:42:49 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:49.283 22:42:49 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:49.283 22:42:49 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:49.283 22:42:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:49.283 22:42:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.283 22:42:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:49.283 22:42:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:49.283 22:42:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:49.283 22:42:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.283 22:42:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.283 22:42:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.283 22:42:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:49.283 22:42:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:49.283 22:42:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:49.283 22:42:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:49.283 22:42:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:49.283 22:42:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:49.283 22:42:50 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:49.283 22:42:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.283 22:42:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:49.283 22:42:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:49.283 22:42:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:49.283 22:42:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:49.283 22:42:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:49.283 22:42:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.283 22:42:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:49.283 22:42:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:49.283 22:42:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:49.283 22:42:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:49.283 22:42:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:49.542 22:42:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:49.542 Cannot find device "nvmf_tgt_br" 00:22:49.542 22:42:50 -- nvmf/common.sh@154 -- # true 00:22:49.542 22:42:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:49.542 Cannot find device "nvmf_tgt_br2" 00:22:49.542 22:42:50 -- nvmf/common.sh@155 -- # true 00:22:49.542 22:42:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:49.542 22:42:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:49.542 Cannot find device "nvmf_tgt_br" 00:22:49.542 22:42:50 -- nvmf/common.sh@157 -- # true 00:22:49.542 22:42:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:49.542 Cannot find device "nvmf_tgt_br2" 00:22:49.542 22:42:50 -- nvmf/common.sh@158 -- # true 00:22:49.542 22:42:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:49.542 22:42:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:49.542 22:42:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:49.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:49.542 22:42:50 -- nvmf/common.sh@161 -- # true 00:22:49.542 22:42:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:49.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:49.542 22:42:50 -- nvmf/common.sh@162 -- # true 00:22:49.542 22:42:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:49.542 22:42:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:49.542 22:42:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:49.542 22:42:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:49.542 22:42:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:49.542 22:42:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:49.542 22:42:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:49.542 22:42:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:49.542 22:42:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:49.542 22:42:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:49.542 22:42:50 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:22:49.542 22:42:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:49.542 22:42:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:49.542 22:42:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:49.542 22:42:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:49.542 22:42:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:49.542 22:42:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:49.542 22:42:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:49.542 22:42:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:49.801 22:42:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:49.801 22:42:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:49.801 22:42:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:49.801 22:42:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:49.801 22:42:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:49.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:22:49.801 00:22:49.801 --- 10.0.0.2 ping statistics --- 00:22:49.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.801 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:49.801 22:42:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:49.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:49.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:22:49.801 00:22:49.801 --- 10.0.0.3 ping statistics --- 00:22:49.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.801 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:49.801 22:42:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:49.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:22:49.801 00:22:49.801 --- 10.0.0.1 ping statistics --- 00:22:49.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.801 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:49.801 22:42:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.801 22:42:50 -- nvmf/common.sh@421 -- # return 0 00:22:49.801 22:42:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:49.801 22:42:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.801 22:42:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:49.801 22:42:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:49.801 22:42:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.801 22:42:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:49.801 22:42:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:49.801 22:42:50 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:49.801 22:42:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:49.802 22:42:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:49.802 22:42:50 -- common/autotest_common.sh@10 -- # set +x 00:22:49.802 22:42:50 -- nvmf/common.sh@469 -- # nvmfpid=98133 00:22:49.802 22:42:50 -- nvmf/common.sh@470 -- # waitforlisten 98133 00:22:49.802 22:42:50 -- common/autotest_common.sh@829 -- # '[' -z 98133 ']' 00:22:49.802 22:42:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:49.802 22:42:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.802 22:42:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.802 22:42:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.802 22:42:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.802 22:42:50 -- common/autotest_common.sh@10 -- # set +x 00:22:49.802 [2024-11-20 22:42:50.421774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:49.802 [2024-11-20 22:42:50.421883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.060 [2024-11-20 22:42:50.562827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.060 [2024-11-20 22:42:50.644251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:50.060 [2024-11-20 22:42:50.644494] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.060 [2024-11-20 22:42:50.644517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.060 [2024-11-20 22:42:50.644531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
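The nvmf_veth_init sequence traced above builds the test network: a dedicated namespace for the target, veth pairs whose bridge-side ends are enslaved to a bridge, addresses in 10.0.0.0/24, and ping checks in both directions before the target is started. A condensed sketch of the same topology, assuming a single target interface and root privileges (interface names and addresses mirror the trace):

  #!/usr/bin/env bash
  # Condensed veth/netns topology as set up by nvmf_veth_init above.
  set -euo pipefail

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator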
00:22:50.060 [2024-11-20 22:42:50.644580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.627 22:42:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.627 22:42:51 -- common/autotest_common.sh@862 -- # return 0 00:22:50.627 22:42:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:50.627 22:42:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.627 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.627 22:42:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.627 22:42:51 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:50.627 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.627 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.627 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.627 22:42:51 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:50.627 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.627 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.886 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.886 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.886 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.886 [2024-11-20 22:42:51.467293] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.886 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:50.886 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.886 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.886 [2024-11-20 22:42:51.475455] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:50.886 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:50.886 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.886 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.886 null0 00:22:50.886 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:50.886 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.886 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.886 null1 00:22:50.886 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:50.886 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.886 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.886 null2 00:22:50.886 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:50.886 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.886 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.886 null3 00:22:50.886 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
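Once the target process is up, the configuration traced above is a short sequence of RPCs: limit discovery log responses by address, finish framework init (the target was started with --wait-for-rpc), create the TCP transport, expose the discovery subsystem on 10.0.0.2:8009, and create the null bdevs that later back the data subsystems. A sketch of the same sequence, assuming the target answers on its default RPC socket:

  #!/usr/bin/env bash
  # Target-side configuration mirroring the rpc_cmd sequence above
  # (default /var/tmp/spdk.sock assumed).
  set -euo pipefail
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$RPC" nvmf_set_config --discovery-filter=address
  "$RPC" framework_start_init
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009

  for n in null0 null1 null2 null3; do
      "$RPC" bdev_null_create "$n" 1000 512    # same size and block size as above
  done
  "$RPC" bdev_wait_for_examine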
00:22:50.886 22:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.886 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.886 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:50.886 22:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@47 -- # hostpid=98179 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:50.886 22:42:51 -- host/mdns_discovery.sh@48 -- # waitforlisten 98179 /tmp/host.sock 00:22:50.886 22:42:51 -- common/autotest_common.sh@829 -- # '[' -z 98179 ']' 00:22:50.887 22:42:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:50.887 22:42:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.887 22:42:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:50.887 22:42:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.887 22:42:51 -- common/autotest_common.sh@10 -- # set +x 00:22:50.887 [2024-11-20 22:42:51.574615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:50.887 [2024-11-20 22:42:51.574881] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98179 ] 00:22:51.146 [2024-11-20 22:42:51.714826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.146 [2024-11-20 22:42:51.781901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:51.146 [2024-11-20 22:42:51.782201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.081 22:42:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.082 22:42:52 -- common/autotest_common.sh@862 -- # return 0 00:22:52.082 22:42:52 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:52.082 22:42:52 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:52.082 22:42:52 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:52.082 22:42:52 -- host/mdns_discovery.sh@57 -- # avahipid=98209 00:22:52.082 22:42:52 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:52.082 22:42:52 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:52.082 22:42:52 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:52.082 Process 1067 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:52.082 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:52.082 Successfully dropped root privileges. 00:22:52.082 avahi-daemon 0.8 starting up. 00:22:53.018 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:53.018 Successfully called chroot(). 00:22:53.018 Successfully dropped remaining capabilities. 00:22:53.018 No service file found in /etc/avahi/services. 00:22:53.019 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:53.019 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
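The host side mirrors this: a second nvmf_tgt instance (core mask 0x1) is started against its own RPC socket at /tmp/host.sock, and avahi-daemon is relaunched inside the target namespace with a configuration limited to the two test interfaces, which is what the echo -e | avahi-daemon -f /dev/fd/63 pipeline above does. A hedged sketch of the same setup:

# Sketch of the host app and mDNS daemon setup; the config lines are the ones echoed in the log.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!

avahi-daemon --kill || true        # stop any previously running instance first
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
    '[server]' \
    'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
    'use-ipv4=yes' \
    'use-ipv6=no') &
avahipid=$!
sleep 1                            # give avahi time to join the mDNS groups, matching the test's sleep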
00:22:53.019 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:53.019 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:53.019 Network interface enumeration completed. 00:22:53.019 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:22:53.019 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:53.019 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:22:53.019 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:53.019 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2126719378. 00:22:53.019 22:42:53 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:53.019 22:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.019 22:42:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.277 22:42:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:53.278 22:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.278 22:42:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.278 22:42:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@68 -- # sort 00:22:53.278 22:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.278 22:42:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@68 -- # xargs 00:22:53.278 22:42:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.278 22:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:53.278 22:42:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@64 -- # sort 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@64 -- # xargs 00:22:53.278 22:42:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:53.278 22:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.278 22:42:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.278 22:42:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:53.278 22:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@68 -- # sort 00:22:53.278 22:42:53 -- common/autotest_common.sh@10 -- # set +x 
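With avahi announcing the two interfaces, the host app is pointed at mDNS discovery, and the helper pipelines used throughout the rest of the log check the resulting controller and bdev lists, which are still empty at this point. In stand-alone form (rpc_cmd below is an assumed, simplified stand-in for the autotest helper):

# Sketch of the host-side discovery start and the (still empty) list checks above.
rpc_cmd() { ./scripts/rpc.py "$@"; }   # assumption: stand-in for the autotest rpc_cmd wrapper

rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }

[[ $(get_subsystem_names) == '' ]]   # no controllers attached yet
[[ $(get_bdev_list) == '' ]]         # and therefore no namespaces/bdevs yet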
00:22:53.278 22:42:53 -- host/mdns_discovery.sh@68 -- # xargs 00:22:53.278 22:42:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:53.278 22:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@64 -- # sort 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@64 -- # xargs 00:22:53.278 22:42:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.278 22:42:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:53.278 22:42:53 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:53.278 22:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.278 22:42:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.278 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.278 22:42:54 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@68 -- # sort 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@68 -- # xargs 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 [2024-11-20 22:42:54.057052] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@64 -- # xargs 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@64 -- # sort 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 [2024-11-20 22:42:54.124931] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 [2024-11-20 22:42:54.164733] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:53.536 22:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.536 22:42:54 -- common/autotest_common.sh@10 -- # set +x 00:22:53.536 [2024-11-20 22:42:54.172728] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:53.536 22:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98270 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:53.536 22:42:54 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:54.471 [2024-11-20 22:42:54.957052] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:54.471 Established under name 'CDC' 00:22:54.729 [2024-11-20 22:42:55.357064] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:54.729 [2024-11-20 22:42:55.357103] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:22:54.729 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:54.729 cookie is 0 00:22:54.729 is_local: 1 00:22:54.729 our_own: 0 00:22:54.729 wide_area: 0 00:22:54.729 multicast: 1 00:22:54.729 cached: 1 00:22:54.729 [2024-11-20 22:42:55.457073] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:54.729 [2024-11-20 22:42:55.457096] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:22:54.729 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:54.729 cookie is 0 00:22:54.729 is_local: 1 00:22:54.729 our_own: 0 00:22:54.729 wide_area: 0 00:22:54.729 multicast: 1 00:22:54.729 cached: 1 00:22:55.666 [2024-11-20 22:42:56.361273] 
bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:55.666 [2024-11-20 22:42:56.361309] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:55.666 [2024-11-20 22:42:56.361326] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:55.924 [2024-11-20 22:42:56.448379] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:55.924 [2024-11-20 22:42:56.461024] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:55.924 [2024-11-20 22:42:56.461047] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:55.924 [2024-11-20 22:42:56.461078] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:55.924 [2024-11-20 22:42:56.505418] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:55.924 [2024-11-20 22:42:56.505446] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:55.924 [2024-11-20 22:42:56.548670] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:55.924 [2024-11-20 22:42:56.610308] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:55.924 [2024-11-20 22:42:56.610332] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:58.457 22:42:59 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:58.457 22:42:59 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:58.457 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.457 22:42:59 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:58.457 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.457 22:42:59 -- host/mdns_discovery.sh@80 -- # sort 00:22:58.457 22:42:59 -- host/mdns_discovery.sh@80 -- # xargs 00:22:58.715 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:58.715 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.715 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@76 -- # sort 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@76 -- # xargs 00:22:58.715 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@68 -- # sort 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@68 -- # 
jq -r '.[].name' 00:22:58.715 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.715 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@68 -- # xargs 00:22:58.715 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.715 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:58.715 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@64 -- # sort 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@64 -- # xargs 00:22:58.715 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:58.715 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.715 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:58.715 22:42:59 -- host/mdns_discovery.sh@72 -- # xargs 00:22:58.715 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:58.972 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.972 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@72 -- # xargs 00:22:58.972 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:58.972 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.972 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.972 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:58.972 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.972 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.972 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:58.972 22:42:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.972 22:42:59 -- common/autotest_common.sh@10 -- # set +x 00:22:58.972 22:42:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.972 22:42:59 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:59.921 22:43:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.921 22:43:00 -- common/autotest_common.sh@10 -- # set +x 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@64 -- # sort 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@64 -- # xargs 00:22:59.921 22:43:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:59.921 22:43:00 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:59.921 22:43:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.921 22:43:00 -- common/autotest_common.sh@10 -- # set +x 00:22:59.921 22:43:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.183 22:43:00 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:00.183 22:43:00 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:00.183 22:43:00 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:00.183 22:43:00 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:00.183 22:43:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.183 22:43:00 -- common/autotest_common.sh@10 -- # set +x 00:23:00.183 [2024-11-20 22:43:00.683139] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:00.183 [2024-11-20 22:43:00.684333] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:00.183 [2024-11-20 22:43:00.684361] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:00.183 [2024-11-20 22:43:00.684394] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:00.183 [2024-11-20 22:43:00.684407] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:00.183 22:43:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.183 22:43:00 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:00.183 22:43:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.183 22:43:00 -- common/autotest_common.sh@10 -- # set +x 00:23:00.183 [2024-11-20 22:43:00.691027] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:00.183 [2024-11-20 22:43:00.691351] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:00.183 [2024-11-20 22:43:00.691402] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:00.183 22:43:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.183 22:43:00 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:00.183 [2024-11-20 22:43:00.822444] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:00.183 [2024-11-20 22:43:00.823439] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:00.183 [2024-11-20 22:43:00.884650] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:00.183 [2024-11-20 22:43:00.884676] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:00.183 [2024-11-20 22:43:00.884683] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:00.183 [2024-11-20 22:43:00.884699] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:00.183 [2024-11-20 22:43:00.884736] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:00.183 [2024-11-20 22:43:00.884745] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:00.183 [2024-11-20 22:43:00.884750] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:00.183 [2024-11-20 22:43:00.884762] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:00.439 [2024-11-20 22:43:00.930537] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:00.439 [2024-11-20 22:43:00.930557] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:00.439 [2024-11-20 22:43:00.930614] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:00.439 [2024-11-20 22:43:00.930622] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:01.006 22:43:01 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:01.006 22:43:01 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.006 22:43:01 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:01.006 22:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.006 22:43:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 22:43:01 -- host/mdns_discovery.sh@68 -- # sort 00:23:01.006 22:43:01 -- host/mdns_discovery.sh@68 -- # xargs 00:23:01.006 22:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@64 -- # xargs 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:01.265 22:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.265 22:43:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@64 -- # sort 00:23:01.265 22:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:01.265 22:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.265 22:43:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@72 -- # xargs 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:01.265 22:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:01.265 22:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.265 22:43:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@72 -- # xargs 00:23:01.265 22:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:01.265 22:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.265 22:43:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:01.265 22:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:01.265 22:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.265 22:43:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.265 [2024-11-20 22:43:01.979950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.265 [2024-11-20 22:43:01.979986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.265 [2024-11-20 22:43:01.980015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.265 [2024-11-20 22:43:01.980024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.265 [2024-11-20 22:43:01.980032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.265 [2024-11-20 22:43:01.980040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.265 [2024-11-20 22:43:01.980048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.265 [2024-11-20 22:43:01.980056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.265 [2024-11-20 22:43:01.980064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.265 [2024-11-20 22:43:01.980120] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:01.265 [2024-11-20 22:43:01.980137] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:01.265 [2024-11-20 22:43:01.980165] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:01.265 [2024-11-20 
22:43:01.980176] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:01.265 22:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.265 22:43:01 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:01.265 22:43:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.265 22:43:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.266 [2024-11-20 22:43:01.988132] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:01.266 [2024-11-20 22:43:01.988205] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:01.266 [2024-11-20 22:43:01.989908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.266 22:43:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.266 22:43:01 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:01.266 [2024-11-20 22:43:01.994034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.266 [2024-11-20 22:43:01.994070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.266 [2024-11-20 22:43:01.994083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.266 [2024-11-20 22:43:01.994093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.266 [2024-11-20 22:43:01.994104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.266 [2024-11-20 22:43:01.994113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.266 [2024-11-20 22:43:01.994123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.266 [2024-11-20 22:43:01.994132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.266 [2024-11-20 22:43:01.994155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.526 [2024-11-20 22:43:01.999940] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.526 [2024-11-20 22:43:02.000066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-11-20 22:43:02.000114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-11-20 22:43:02.000131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.526 [2024-11-20 22:43:02.000141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.526 [2024-11-20 22:43:02.000157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.526 [2024-11-20 22:43:02.000187] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.526 [2024-11-20 22:43:02.000212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.526 [2024-11-20 22:43:02.000222] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.526 [2024-11-20 22:43:02.000255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.526 [2024-11-20 22:43:02.003994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.526 [2024-11-20 22:43:02.010023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.526 [2024-11-20 22:43:02.010172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-11-20 22:43:02.010218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-11-20 22:43:02.010248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.526 [2024-11-20 22:43:02.010257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.526 [2024-11-20 22:43:02.010272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.526 [2024-11-20 22:43:02.010301] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.526 [2024-11-20 22:43:02.010328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.526 [2024-11-20 22:43:02.010352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.526 [2024-11-20 22:43:02.010381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.526 [2024-11-20 22:43:02.014005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.526 [2024-11-20 22:43:02.014106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-11-20 22:43:02.014168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-11-20 22:43:02.014199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.526 [2024-11-20 22:43:02.014208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.526 [2024-11-20 22:43:02.014222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.014235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.014243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.014267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.527 [2024-11-20 22:43:02.014282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
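The burst of "connect() failed, errno = 111" / "Resetting controller failed." messages here is expected: the test has just removed the port-4420 listeners from both subsystems (the nvmf_subsystem_remove_listener calls at mdns_discovery.sh@160/@161 above), so the host's existing 4420 paths are refused (errno 111 is ECONNREFUSED) each time bdev_nvme retries, until discovery drops the stale paths and only 4421 remains. A hedged sketch of that step, together with the path helper defined earlier in the log:

# Sketch of the listener removal that triggers the reconnect errors above (target-side RPC socket);
# the expectation that only 4421 survives is an assumption about the test's later checks, not
# something shown in this excerpt.
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

get_subsystem_paths() {            # as in host/mdns_discovery.sh@72: list trsvcids of one controller's paths
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
# e.g. get_subsystem_paths mdns1_nvme0  ->  "4420 4421" before removal, "4421" once the stale path is pruned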
00:23:01.527 [2024-11-20 22:43:02.020105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.527 [2024-11-20 22:43:02.020194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.020237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.020252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.527 [2024-11-20 22:43:02.020261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.020275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.020288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.020308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.020317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.527 [2024-11-20 22:43:02.020346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.527 [2024-11-20 22:43:02.024072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.527 [2024-11-20 22:43:02.024160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.024204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.024219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.527 [2024-11-20 22:43:02.024228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.024242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.024255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.024263] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.024270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.527 [2024-11-20 22:43:02.024283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.527 [2024-11-20 22:43:02.030181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.527 [2024-11-20 22:43:02.030301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.030356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.030372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.527 [2024-11-20 22:43:02.030381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.030395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.030423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.030434] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.030442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.527 [2024-11-20 22:43:02.030471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.527 [2024-11-20 22:43:02.034120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.527 [2024-11-20 22:43:02.034278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.034333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.034351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.527 [2024-11-20 22:43:02.034361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.034376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.034389] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.034397] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.034405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.527 [2024-11-20 22:43:02.034418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.527 [2024-11-20 22:43:02.040259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.527 [2024-11-20 22:43:02.040374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.040418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.040433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.527 [2024-11-20 22:43:02.040442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.040457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.040486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.040496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.040504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.527 [2024-11-20 22:43:02.040518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.527 [2024-11-20 22:43:02.044216] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.527 [2024-11-20 22:43:02.044329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.044375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.044391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.527 [2024-11-20 22:43:02.044400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.044415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.044428] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.044436] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.044444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.527 [2024-11-20 22:43:02.044457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.527 [2024-11-20 22:43:02.050329] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.527 [2024-11-20 22:43:02.050417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.050459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.050473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.527 [2024-11-20 22:43:02.050482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.050496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.050524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.050534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.050542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.527 [2024-11-20 22:43:02.050555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.527 [2024-11-20 22:43:02.054261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.527 [2024-11-20 22:43:02.054387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.054429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.054444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.527 [2024-11-20 22:43:02.054452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.054466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.054479] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.054487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.054510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.527 [2024-11-20 22:43:02.054539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.527 [2024-11-20 22:43:02.060373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.527 [2024-11-20 22:43:02.060478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.060521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.527 [2024-11-20 22:43:02.060537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.527 [2024-11-20 22:43:02.060546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.527 [2024-11-20 22:43:02.060560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.527 [2024-11-20 22:43:02.060588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.527 [2024-11-20 22:43:02.060599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.527 [2024-11-20 22:43:02.060607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.528 [2024-11-20 22:43:02.060620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.528 [2024-11-20 22:43:02.064345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.528 [2024-11-20 22:43:02.064448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.064491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.064507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.528 [2024-11-20 22:43:02.064516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.064530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.064543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.064551] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.064559] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.528 [2024-11-20 22:43:02.064587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.528 [2024-11-20 22:43:02.070449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.528 [2024-11-20 22:43:02.070537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.070579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.070594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.528 [2024-11-20 22:43:02.070603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.070616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.070644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.070654] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.070662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.528 [2024-11-20 22:43:02.070675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.528 [2024-11-20 22:43:02.074405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.528 [2024-11-20 22:43:02.074492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.074534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.074549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.528 [2024-11-20 22:43:02.074558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.074572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.074585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.074593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.074600] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.528 [2024-11-20 22:43:02.074613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.528 [2024-11-20 22:43:02.080496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.528 [2024-11-20 22:43:02.080607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.080651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.080666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.528 [2024-11-20 22:43:02.080690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.080705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.080734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.080744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.080751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.528 [2024-11-20 22:43:02.080780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.528 [2024-11-20 22:43:02.084449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.528 [2024-11-20 22:43:02.084554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.084597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.084613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.528 [2024-11-20 22:43:02.084623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.084637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.084650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.084662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.084670] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.528 [2024-11-20 22:43:02.084683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.528 [2024-11-20 22:43:02.090561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.528 [2024-11-20 22:43:02.090666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.090709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.090724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.528 [2024-11-20 22:43:02.090734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.090748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.090776] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.090786] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.090794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.528 [2024-11-20 22:43:02.090807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.528 [2024-11-20 22:43:02.094527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.528 [2024-11-20 22:43:02.094614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.094655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.094670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.528 [2024-11-20 22:43:02.094679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.094693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.094706] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.094714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.094721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.528 [2024-11-20 22:43:02.094734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.528 [2024-11-20 22:43:02.100639] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.528 [2024-11-20 22:43:02.100758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.100800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.100815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.528 [2024-11-20 22:43:02.100824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.100838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.100866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.100875] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.100883] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.528 [2024-11-20 22:43:02.100896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.528 [2024-11-20 22:43:02.104572] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.528 [2024-11-20 22:43:02.104661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.104718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.104733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.528 [2024-11-20 22:43:02.104742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.528 [2024-11-20 22:43:02.104756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.528 [2024-11-20 22:43:02.104768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.528 [2024-11-20 22:43:02.104792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.528 [2024-11-20 22:43:02.104816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.528 [2024-11-20 22:43:02.104829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.528 [2024-11-20 22:43:02.110717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.528 [2024-11-20 22:43:02.110804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.110846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.528 [2024-11-20 22:43:02.110861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176d00 with addr=10.0.0.2, port=4420 00:23:01.529 [2024-11-20 22:43:02.110870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176d00 is same with the state(5) to be set 00:23:01.529 [2024-11-20 22:43:02.110884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d00 (9): Bad file descriptor 00:23:01.529 [2024-11-20 22:43:02.110911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.529 [2024-11-20 22:43:02.110921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.529 [2024-11-20 22:43:02.110929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.529 [2024-11-20 22:43:02.110942] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.529 [2024-11-20 22:43:02.114633] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:01.529 [2024-11-20 22:43:02.114736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.529 [2024-11-20 22:43:02.114778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.529 [2024-11-20 22:43:02.114794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2124070 with addr=10.0.0.3, port=4420 00:23:01.529 [2024-11-20 22:43:02.114802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124070 is same with the state(5) to be set 00:23:01.529 [2024-11-20 22:43:02.114816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2124070 (9): Bad file descriptor 00:23:01.529 [2024-11-20 22:43:02.114829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:01.529 [2024-11-20 22:43:02.114838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:01.529 [2024-11-20 22:43:02.114845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:01.529 [2024-11-20 22:43:02.114858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
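Each failed reset above follows the same pattern: the host disconnects the controller, tries to reconnect its queue pair to port 4420, and connect() returns errno 111, so controller reinitialization fails and the controller is left in the failed state. The 4420 listener has just been withdrawn by the test (the discovery update below reports nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 "not found" and :4421 "found again"), so these refusals are expected until the path on 4421 is attached. On Linux, errno 111 is ECONNREFUSED; a one-line check, illustrative only and not part of the test:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # prints: ECONNREFUSED Connection refused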
00:23:01.529 [2024-11-20 22:43:02.118912] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:01.529 [2024-11-20 22:43:02.118939] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:01.529 [2024-11-20 22:43:02.118957] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:01.529 [2024-11-20 22:43:02.118987] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:01.529 [2024-11-20 22:43:02.119000] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:01.529 [2024-11-20 22:43:02.119012] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:01.529 [2024-11-20 22:43:02.204976] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:01.529 [2024-11-20 22:43:02.205029] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:02.466 22:43:02 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:02.466 22:43:02 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.466 22:43:02 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:02.466 22:43:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.466 22:43:02 -- common/autotest_common.sh@10 -- # set +x 00:23:02.466 22:43:02 -- host/mdns_discovery.sh@68 -- # sort 00:23:02.466 22:43:02 -- host/mdns_discovery.sh@68 -- # xargs 00:23:02.466 22:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.466 22:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.466 22:43:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@64 -- # sort 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@64 -- # xargs 00:23:02.466 22:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.466 22:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.466 22:43:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@72 -- # xargs 00:23:02.466 22:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:02.466 22:43:03 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:02.466 22:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.466 22:43:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:02.466 22:43:03 -- host/mdns_discovery.sh@72 -- # xargs 00:23:02.466 22:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:02.725 22:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.725 22:43:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.725 22:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:02.725 22:43:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.725 22:43:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.725 22:43:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.725 22:43:03 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:02.725 [2024-11-20 22:43:03.357068] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:03.661 22:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.661 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@80 -- # sort 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@80 -- # xargs 00:23:03.661 22:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@68 -- # sort 00:23:03.661 22:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.661 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.661 22:43:04 -- host/mdns_discovery.sh@68 -- # xargs 00:23:03.661 22:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:03.920 22:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.920 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@64 -- # sort 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@64 -- # xargs 00:23:03.920 22:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:03.920 22:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.920 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.920 22:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:03.920 22:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.920 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.920 22:43:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:03.920 22:43:04 -- common/autotest_common.sh@650 -- # local es=0 00:23:03.920 22:43:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:03.920 22:43:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.920 22:43:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.920 22:43:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.920 22:43:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.920 22:43:04 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:03.920 22:43:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.920 22:43:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.920 [2024-11-20 22:43:04.524216] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:03.920 2024/11/20 22:43:04 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:03.920 request: 00:23:03.920 { 00:23:03.920 "method": "bdev_nvme_start_mdns_discovery", 00:23:03.920 "params": { 00:23:03.920 "name": "mdns", 00:23:03.920 "svcname": "_nvme-disc._http", 00:23:03.920 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:03.920 } 00:23:03.920 } 00:23:03.920 Got JSON-RPC error response 00:23:03.920 GoRPCClient: error on JSON-RPC call 00:23:03.920 22:43:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:03.920 22:43:04 -- 
common/autotest_common.sh@653 -- # es=1 00:23:03.920 22:43:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.920 22:43:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.920 22:43:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.920 22:43:04 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:04.488 [2024-11-20 22:43:04.912840] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:04.488 [2024-11-20 22:43:05.012822] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:04.488 [2024-11-20 22:43:05.112827] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:04.488 [2024-11-20 22:43:05.112847] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:04.488 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:04.488 cookie is 0 00:23:04.488 is_local: 1 00:23:04.488 our_own: 0 00:23:04.488 wide_area: 0 00:23:04.488 multicast: 1 00:23:04.488 cached: 1 00:23:04.488 [2024-11-20 22:43:05.212830] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:04.488 [2024-11-20 22:43:05.212851] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:04.488 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:04.488 cookie is 0 00:23:04.488 is_local: 1 00:23:04.488 our_own: 0 00:23:04.488 wide_area: 0 00:23:04.488 multicast: 1 00:23:04.488 cached: 1 00:23:05.425 [2024-11-20 22:43:06.122586] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:05.425 [2024-11-20 22:43:06.122608] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:05.425 [2024-11-20 22:43:06.122639] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:05.683 [2024-11-20 22:43:06.208708] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:05.683 [2024-11-20 22:43:06.222456] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:05.683 [2024-11-20 22:43:06.222473] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:05.683 [2024-11-20 22:43:06.222487] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:05.683 [2024-11-20 22:43:06.275522] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:05.683 [2024-11-20 22:43:06.275546] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:05.683 [2024-11-20 22:43:06.308632] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:05.683 [2024-11-20 22:43:06.367425] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:05.683 [2024-11-20 22:43:06.367453] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:08.972 22:43:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.972 22:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@80 -- # sort 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@80 -- # xargs 00:23:08.972 22:43:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@76 -- # xargs 00:23:08.972 22:43:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.972 22:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@76 -- # sort 00:23:08.972 22:43:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:08.972 22:43:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.972 22:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@64 -- # sort 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@64 -- # xargs 00:23:08.972 22:43:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:08.972 22:43:09 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:08.972 22:43:09 -- common/autotest_common.sh@650 -- # local es=0 00:23:08.972 22:43:09 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:08.972 22:43:09 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:08.972 22:43:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.972 22:43:09 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:09.231 22:43:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.231 22:43:09 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:09.231 22:43:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.231 22:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:09.231 [2024-11-20 22:43:09.706413] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:09.231 2024/11/20 22:43:09 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:09.231 request: 00:23:09.231 { 00:23:09.231 "method": "bdev_nvme_start_mdns_discovery", 00:23:09.231 "params": { 00:23:09.231 "name": "cdc", 00:23:09.231 "svcname": "_nvme-disc._tcp", 00:23:09.231 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:09.231 } 00:23:09.231 } 00:23:09.231 Got JSON-RPC error response 00:23:09.231 GoRPCClient: error on JSON-RPC call 00:23:09.231 22:43:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:09.231 22:43:09 -- common/autotest_common.sh@653 -- # es=1 00:23:09.231 22:43:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.231 22:43:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.231 22:43:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.231 22:43:09 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@76 -- # sort 00:23:09.232 22:43:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.232 22:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@76 -- # xargs 00:23:09.232 22:43:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@64 -- # sort 00:23:09.232 22:43:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.232 22:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@64 -- # xargs 00:23:09.232 22:43:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:09.232 22:43:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.232 22:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:09.232 22:43:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@197 -- # kill 98179 00:23:09.232 22:43:09 -- host/mdns_discovery.sh@200 -- # wait 98179 00:23:09.232 [2024-11-20 22:43:09.930536] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:09.490 22:43:10 -- host/mdns_discovery.sh@201 -- # kill 98270 00:23:09.490 22:43:10 -- host/mdns_discovery.sh@202 -- # kill 98209 00:23:09.490 Got SIGTERM, quitting. 00:23:09.490 22:43:10 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:09.490 22:43:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:09.490 Got SIGTERM, quitting. 
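The two rejected calls above are the negative half of the mDNS discovery checks: once a discovery service named "mdns" is running for _nvme-disc._tcp, registering another one that reuses either the name or the service type must fail with JSON-RPC error -17 (File exists). A condensed sketch of the three calls involved, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the socket path and NQN taken from the log:

    rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp  -q nqn.2021-12.io.spdk:test   # accepted earlier in the test
    rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test   # rejected: discovery already running with name mdns
    rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc  -s _nvme-disc._tcp  -q nqn.2021-12.io.spdk:test   # rejected: discovery already running for _nvme-disc._tcp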
00:23:09.490 22:43:10 -- nvmf/common.sh@116 -- # sync 00:23:09.490 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:09.490 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:09.490 avahi-daemon 0.8 exiting. 00:23:09.490 22:43:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:09.490 22:43:10 -- nvmf/common.sh@119 -- # set +e 00:23:09.490 22:43:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:09.490 22:43:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:09.490 rmmod nvme_tcp 00:23:09.490 rmmod nvme_fabrics 00:23:09.490 rmmod nvme_keyring 00:23:09.490 22:43:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:09.490 22:43:10 -- nvmf/common.sh@123 -- # set -e 00:23:09.490 22:43:10 -- nvmf/common.sh@124 -- # return 0 00:23:09.490 22:43:10 -- nvmf/common.sh@477 -- # '[' -n 98133 ']' 00:23:09.490 22:43:10 -- nvmf/common.sh@478 -- # killprocess 98133 00:23:09.490 22:43:10 -- common/autotest_common.sh@936 -- # '[' -z 98133 ']' 00:23:09.490 22:43:10 -- common/autotest_common.sh@940 -- # kill -0 98133 00:23:09.490 22:43:10 -- common/autotest_common.sh@941 -- # uname 00:23:09.490 22:43:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.490 22:43:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98133 00:23:09.490 22:43:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:09.490 22:43:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:09.490 killing process with pid 98133 00:23:09.490 22:43:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98133' 00:23:09.490 22:43:10 -- common/autotest_common.sh@955 -- # kill 98133 00:23:09.490 22:43:10 -- common/autotest_common.sh@960 -- # wait 98133 00:23:09.749 22:43:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:09.749 22:43:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:09.749 22:43:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:09.749 22:43:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.749 22:43:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:09.749 22:43:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.749 22:43:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.749 22:43:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.749 22:43:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:09.749 00:23:09.749 real 0m20.695s 00:23:09.749 user 0m40.364s 00:23:09.749 sys 0m1.989s 00:23:09.749 22:43:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:09.749 22:43:10 -- common/autotest_common.sh@10 -- # set +x 00:23:09.749 ************************************ 00:23:09.749 END TEST nvmf_mdns_discovery 00:23:09.749 ************************************ 00:23:10.009 22:43:10 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:10.009 22:43:10 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:10.009 22:43:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:10.009 22:43:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:10.009 22:43:10 -- common/autotest_common.sh@10 -- # set +x 00:23:10.009 ************************************ 00:23:10.009 START TEST nvmf_multipath 00:23:10.009 ************************************ 00:23:10.009 22:43:10 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:10.009 * Looking for test storage... 00:23:10.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:10.009 22:43:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:10.009 22:43:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:10.009 22:43:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:10.009 22:43:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:10.009 22:43:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:10.009 22:43:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:10.009 22:43:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:10.009 22:43:10 -- scripts/common.sh@335 -- # IFS=.-: 00:23:10.009 22:43:10 -- scripts/common.sh@335 -- # read -ra ver1 00:23:10.009 22:43:10 -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.009 22:43:10 -- scripts/common.sh@336 -- # read -ra ver2 00:23:10.009 22:43:10 -- scripts/common.sh@337 -- # local 'op=<' 00:23:10.009 22:43:10 -- scripts/common.sh@339 -- # ver1_l=2 00:23:10.009 22:43:10 -- scripts/common.sh@340 -- # ver2_l=1 00:23:10.009 22:43:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:10.009 22:43:10 -- scripts/common.sh@343 -- # case "$op" in 00:23:10.009 22:43:10 -- scripts/common.sh@344 -- # : 1 00:23:10.009 22:43:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:10.009 22:43:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:10.009 22:43:10 -- scripts/common.sh@364 -- # decimal 1 00:23:10.009 22:43:10 -- scripts/common.sh@352 -- # local d=1 00:23:10.009 22:43:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.009 22:43:10 -- scripts/common.sh@354 -- # echo 1 00:23:10.009 22:43:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:10.009 22:43:10 -- scripts/common.sh@365 -- # decimal 2 00:23:10.009 22:43:10 -- scripts/common.sh@352 -- # local d=2 00:23:10.009 22:43:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.009 22:43:10 -- scripts/common.sh@354 -- # echo 2 00:23:10.009 22:43:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:10.009 22:43:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:10.009 22:43:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:10.009 22:43:10 -- scripts/common.sh@367 -- # return 0 00:23:10.009 22:43:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.009 22:43:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:10.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.009 --rc genhtml_branch_coverage=1 00:23:10.009 --rc genhtml_function_coverage=1 00:23:10.009 --rc genhtml_legend=1 00:23:10.009 --rc geninfo_all_blocks=1 00:23:10.009 --rc geninfo_unexecuted_blocks=1 00:23:10.009 00:23:10.009 ' 00:23:10.009 22:43:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:10.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.009 --rc genhtml_branch_coverage=1 00:23:10.009 --rc genhtml_function_coverage=1 00:23:10.009 --rc genhtml_legend=1 00:23:10.009 --rc geninfo_all_blocks=1 00:23:10.009 --rc geninfo_unexecuted_blocks=1 00:23:10.009 00:23:10.009 ' 00:23:10.009 22:43:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:10.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.009 --rc genhtml_branch_coverage=1 00:23:10.009 --rc genhtml_function_coverage=1 00:23:10.009 --rc genhtml_legend=1 00:23:10.009 --rc 
geninfo_all_blocks=1 00:23:10.009 --rc geninfo_unexecuted_blocks=1 00:23:10.009 00:23:10.009 ' 00:23:10.009 22:43:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:10.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.009 --rc genhtml_branch_coverage=1 00:23:10.009 --rc genhtml_function_coverage=1 00:23:10.009 --rc genhtml_legend=1 00:23:10.009 --rc geninfo_all_blocks=1 00:23:10.009 --rc geninfo_unexecuted_blocks=1 00:23:10.009 00:23:10.009 ' 00:23:10.009 22:43:10 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:10.009 22:43:10 -- nvmf/common.sh@7 -- # uname -s 00:23:10.009 22:43:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.009 22:43:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.009 22:43:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.009 22:43:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.009 22:43:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.009 22:43:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.009 22:43:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.009 22:43:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.009 22:43:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.009 22:43:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.009 22:43:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:23:10.009 22:43:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:23:10.009 22:43:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.009 22:43:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.009 22:43:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:10.009 22:43:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.009 22:43:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.009 22:43:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.009 22:43:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.009 22:43:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.010 22:43:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.010 22:43:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.010 22:43:10 -- paths/export.sh@5 -- # export PATH 00:23:10.010 22:43:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.010 22:43:10 -- nvmf/common.sh@46 -- # : 0 00:23:10.010 22:43:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:10.010 22:43:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:10.010 22:43:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:10.010 22:43:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.010 22:43:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.010 22:43:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:10.010 22:43:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:10.010 22:43:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:10.010 22:43:10 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:10.010 22:43:10 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:10.010 22:43:10 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:10.010 22:43:10 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:10.010 22:43:10 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.010 22:43:10 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:10.010 22:43:10 -- host/multipath.sh@30 -- # nvmftestinit 00:23:10.010 22:43:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:10.010 22:43:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.010 22:43:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:10.010 22:43:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:10.010 22:43:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:10.010 22:43:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.010 22:43:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.010 22:43:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.010 22:43:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:10.010 22:43:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:10.269 22:43:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:10.269 22:43:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:10.269 22:43:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:10.269 22:43:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:10.269 22:43:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.269 22:43:10 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.269 22:43:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:10.269 22:43:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:10.269 22:43:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:10.269 22:43:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:10.269 22:43:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:10.269 22:43:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.269 22:43:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:10.269 22:43:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:10.269 22:43:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:10.269 22:43:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:10.269 22:43:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:10.269 22:43:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:10.269 Cannot find device "nvmf_tgt_br" 00:23:10.269 22:43:10 -- nvmf/common.sh@154 -- # true 00:23:10.269 22:43:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.269 Cannot find device "nvmf_tgt_br2" 00:23:10.269 22:43:10 -- nvmf/common.sh@155 -- # true 00:23:10.269 22:43:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:10.269 22:43:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:10.269 Cannot find device "nvmf_tgt_br" 00:23:10.269 22:43:10 -- nvmf/common.sh@157 -- # true 00:23:10.269 22:43:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:10.269 Cannot find device "nvmf_tgt_br2" 00:23:10.269 22:43:10 -- nvmf/common.sh@158 -- # true 00:23:10.269 22:43:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:10.269 22:43:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:10.269 22:43:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.269 22:43:10 -- nvmf/common.sh@161 -- # true 00:23:10.269 22:43:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.269 22:43:10 -- nvmf/common.sh@162 -- # true 00:23:10.269 22:43:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:10.269 22:43:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:10.269 22:43:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:10.269 22:43:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:10.269 22:43:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:10.269 22:43:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:10.269 22:43:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:10.269 22:43:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:10.269 22:43:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:10.269 22:43:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:10.269 22:43:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:10.269 22:43:10 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:10.269 22:43:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:10.269 22:43:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:10.529 22:43:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:10.529 22:43:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:10.529 22:43:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:10.529 22:43:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:10.529 22:43:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:10.529 22:43:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:10.529 22:43:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:10.529 22:43:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:10.529 22:43:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:10.529 22:43:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:10.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:23:10.529 00:23:10.529 --- 10.0.0.2 ping statistics --- 00:23:10.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.529 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:23:10.529 22:43:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:10.529 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:10.529 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:23:10.529 00:23:10.529 --- 10.0.0.3 ping statistics --- 00:23:10.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.529 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:10.529 22:43:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:10.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:10.529 00:23:10.529 --- 10.0.0.1 ping statistics --- 00:23:10.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.529 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:10.529 22:43:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.529 22:43:11 -- nvmf/common.sh@421 -- # return 0 00:23:10.529 22:43:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:10.529 22:43:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.529 22:43:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:10.529 22:43:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:10.529 22:43:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.529 22:43:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:10.529 22:43:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:10.529 22:43:11 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:10.529 22:43:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:10.529 22:43:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:10.529 22:43:11 -- common/autotest_common.sh@10 -- # set +x 00:23:10.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
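The interface plumbing logged above (nvmf_veth_init) is easier to read as one unit: the initiator keeps 10.0.0.1 in the default namespace, the target gets 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, and a bridge ties the host-side veth ends together. A condensed reconstruction of the logged commands follows; it is illustrative only, not a substitute for test/nvmf/common.sh:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3             # initiator -> both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator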
00:23:10.529 22:43:11 -- nvmf/common.sh@469 -- # nvmfpid=98789 00:23:10.529 22:43:11 -- nvmf/common.sh@470 -- # waitforlisten 98789 00:23:10.529 22:43:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:10.529 22:43:11 -- common/autotest_common.sh@829 -- # '[' -z 98789 ']' 00:23:10.529 22:43:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.529 22:43:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.529 22:43:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.529 22:43:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.529 22:43:11 -- common/autotest_common.sh@10 -- # set +x 00:23:10.529 [2024-11-20 22:43:11.170795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:10.529 [2024-11-20 22:43:11.170888] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.789 [2024-11-20 22:43:11.310155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:10.789 [2024-11-20 22:43:11.373552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:10.789 [2024-11-20 22:43:11.373676] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.789 [2024-11-20 22:43:11.373688] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.789 [2024-11-20 22:43:11.373695] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:10.789 [2024-11-20 22:43:11.374375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.789 [2024-11-20 22:43:11.374402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.725 22:43:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.725 22:43:12 -- common/autotest_common.sh@862 -- # return 0 00:23:11.725 22:43:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:11.725 22:43:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.725 22:43:12 -- common/autotest_common.sh@10 -- # set +x 00:23:11.725 22:43:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.725 22:43:12 -- host/multipath.sh@33 -- # nvmfapp_pid=98789 00:23:11.725 22:43:12 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:11.725 [2024-11-20 22:43:12.435807] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.725 22:43:12 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:11.984 Malloc0 00:23:11.984 22:43:12 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:12.244 22:43:12 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.503 22:43:13 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.762 [2024-11-20 22:43:13.346584] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.762 22:43:13 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:13.021 [2024-11-20 22:43:13.562895] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:13.021 22:43:13 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:13.021 22:43:13 -- host/multipath.sh@44 -- # bdevperf_pid=98893 00:23:13.021 22:43:13 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.021 22:43:13 -- host/multipath.sh@47 -- # waitforlisten 98893 /var/tmp/bdevperf.sock 00:23:13.021 22:43:13 -- common/autotest_common.sh@829 -- # '[' -z 98893 ']' 00:23:13.021 22:43:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.021 22:43:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.021 22:43:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
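Before the multipath runs start, the target side has just been configured through scripts/rpc.py with one subsystem, one malloc namespace, and two TCP listeners on the same address; the bdevperf host then attaches that subsystem twice (port 4420, and 4421 with -x multipath) to create the two paths, as shown in the log that follows. A condensed reconstruction of the target-side sequence logged above, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421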
00:23:13.021 22:43:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.021 22:43:13 -- common/autotest_common.sh@10 -- # set +x 00:23:14.398 22:43:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.398 22:43:14 -- common/autotest_common.sh@862 -- # return 0 00:23:14.398 22:43:14 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:14.398 22:43:14 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:14.657 Nvme0n1 00:23:14.657 22:43:15 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:14.917 Nvme0n1 00:23:15.175 22:43:15 -- host/multipath.sh@78 -- # sleep 1 00:23:15.176 22:43:15 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:16.109 22:43:16 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:16.109 22:43:16 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:16.368 22:43:16 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:16.627 22:43:17 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:16.627 22:43:17 -- host/multipath.sh@65 -- # dtrace_pid=98980 00:23:16.627 22:43:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:16.627 22:43:17 -- host/multipath.sh@66 -- # sleep 6 00:23:23.191 22:43:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:23.191 22:43:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:23.191 22:43:23 -- host/multipath.sh@67 -- # active_port=4421 00:23:23.191 22:43:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:23.191 Attaching 4 probes... 
00:23:23.191 @path[10.0.0.2, 4421]: 20439 00:23:23.191 @path[10.0.0.2, 4421]: 20829 00:23:23.191 @path[10.0.0.2, 4421]: 20713 00:23:23.191 @path[10.0.0.2, 4421]: 20841 00:23:23.191 @path[10.0.0.2, 4421]: 20874 00:23:23.191 22:43:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:23.191 22:43:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:23.191 22:43:23 -- host/multipath.sh@69 -- # sed -n 1p 00:23:23.191 22:43:23 -- host/multipath.sh@69 -- # port=4421 00:23:23.191 22:43:23 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:23.191 22:43:23 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:23.191 22:43:23 -- host/multipath.sh@72 -- # kill 98980 00:23:23.191 22:43:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:23.191 22:43:23 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:23.191 22:43:23 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:23.191 22:43:23 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:23.191 22:43:23 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:23.191 22:43:23 -- host/multipath.sh@65 -- # dtrace_pid=99113 00:23:23.191 22:43:23 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:23.191 22:43:23 -- host/multipath.sh@66 -- # sleep 6 00:23:29.758 22:43:29 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:29.758 22:43:29 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:29.758 22:43:30 -- host/multipath.sh@67 -- # active_port=4420 00:23:29.758 22:43:30 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:29.758 Attaching 4 probes... 
00:23:29.758 @path[10.0.0.2, 4420]: 21444 00:23:29.758 @path[10.0.0.2, 4420]: 21670 00:23:29.758 @path[10.0.0.2, 4420]: 21683 00:23:29.758 @path[10.0.0.2, 4420]: 21730 00:23:29.758 @path[10.0.0.2, 4420]: 21729 00:23:29.758 22:43:30 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:29.758 22:43:30 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:29.758 22:43:30 -- host/multipath.sh@69 -- # sed -n 1p 00:23:29.758 22:43:30 -- host/multipath.sh@69 -- # port=4420 00:23:29.758 22:43:30 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.758 22:43:30 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.758 22:43:30 -- host/multipath.sh@72 -- # kill 99113 00:23:29.758 22:43:30 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:29.758 22:43:30 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:29.758 22:43:30 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:29.758 22:43:30 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:30.017 22:43:30 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:30.017 22:43:30 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:30.017 22:43:30 -- host/multipath.sh@65 -- # dtrace_pid=99243 00:23:30.017 22:43:30 -- host/multipath.sh@66 -- # sleep 6 00:23:36.583 22:43:36 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:36.583 22:43:36 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:36.583 22:43:36 -- host/multipath.sh@67 -- # active_port=4421 00:23:36.583 22:43:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:36.583 Attaching 4 probes... 
00:23:36.583 @path[10.0.0.2, 4421]: 14881 00:23:36.583 @path[10.0.0.2, 4421]: 20536 00:23:36.583 @path[10.0.0.2, 4421]: 20511 00:23:36.583 @path[10.0.0.2, 4421]: 20492 00:23:36.583 @path[10.0.0.2, 4421]: 20544 00:23:36.583 22:43:36 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:36.583 22:43:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:36.583 22:43:36 -- host/multipath.sh@69 -- # sed -n 1p 00:23:36.583 22:43:36 -- host/multipath.sh@69 -- # port=4421 00:23:36.583 22:43:36 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.583 22:43:36 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.583 22:43:36 -- host/multipath.sh@72 -- # kill 99243 00:23:36.583 22:43:36 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:36.583 22:43:36 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:36.583 22:43:36 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:36.583 22:43:37 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:36.842 22:43:37 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:36.842 22:43:37 -- host/multipath.sh@65 -- # dtrace_pid=99374 00:23:36.842 22:43:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:36.842 22:43:37 -- host/multipath.sh@66 -- # sleep 6 00:23:43.405 22:43:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:43.405 22:43:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:43.405 22:43:43 -- host/multipath.sh@67 -- # active_port= 00:23:43.405 22:43:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.405 Attaching 4 probes... 
00:23:43.405 00:23:43.405 00:23:43.405 00:23:43.405 00:23:43.405 00:23:43.405 22:43:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:43.405 22:43:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:43.405 22:43:43 -- host/multipath.sh@69 -- # sed -n 1p 00:23:43.405 22:43:43 -- host/multipath.sh@69 -- # port= 00:23:43.405 22:43:43 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:43.405 22:43:43 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:43.405 22:43:43 -- host/multipath.sh@72 -- # kill 99374 00:23:43.405 22:43:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.405 22:43:43 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:43.405 22:43:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:43.405 22:43:43 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:43.405 22:43:44 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:43.405 22:43:44 -- host/multipath.sh@65 -- # dtrace_pid=99504 00:23:43.405 22:43:44 -- host/multipath.sh@66 -- # sleep 6 00:23:43.405 22:43:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:50.000 22:43:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:50.000 22:43:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:50.000 22:43:50 -- host/multipath.sh@67 -- # active_port=4421 00:23:50.000 22:43:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.000 Attaching 4 probes... 
00:23:50.000 @path[10.0.0.2, 4421]: 20569 00:23:50.000 @path[10.0.0.2, 4421]: 20891 00:23:50.000 @path[10.0.0.2, 4421]: 21022 00:23:50.000 @path[10.0.0.2, 4421]: 20998 00:23:50.000 @path[10.0.0.2, 4421]: 21035 00:23:50.000 22:43:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:50.000 22:43:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:50.000 22:43:50 -- host/multipath.sh@69 -- # sed -n 1p 00:23:50.000 22:43:50 -- host/multipath.sh@69 -- # port=4421 00:23:50.000 22:43:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:50.000 22:43:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:50.000 22:43:50 -- host/multipath.sh@72 -- # kill 99504 00:23:50.000 22:43:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.000 22:43:50 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:50.000 [2024-11-20 22:43:50.675882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e70 is same with the state(5) to be set
00:23:50.002 [2024-11-20 22:43:50.676832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e70 is same with the state(5) to be set 00:23:50.002 22:43:50 -- host/multipath.sh@101 -- # sleep 1 00:23:51.385 22:43:51 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:51.385 22:43:51 -- host/multipath.sh@65 -- # dtrace_pid=99640 00:23:51.385 22:43:51 -- host/multipath.sh@66 -- # sleep 6 00:23:51.385 22:43:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:57.952 22:43:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:57.952 22:43:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:57.952 22:43:57 -- host/multipath.sh@67 -- # active_port=4420 00:23:57.952 22:43:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.952 Attaching 4 probes...
00:23:57.952 @path[10.0.0.2, 4420]: 22548 00:23:57.952 @path[10.0.0.2, 4420]: 22819 00:23:57.952 @path[10.0.0.2, 4420]: 22418 00:23:57.952 @path[10.0.0.2, 4420]: 22331 00:23:57.952 @path[10.0.0.2, 4420]: 22388 00:23:57.952 22:43:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:57.952 22:43:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:57.952 22:43:57 -- host/multipath.sh@69 -- # sed -n 1p 00:23:57.952 22:43:57 -- host/multipath.sh@69 -- # port=4420 00:23:57.952 22:43:57 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:57.952 22:43:57 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:57.952 22:43:57 -- host/multipath.sh@72 -- # kill 99640 00:23:57.952 22:43:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.952 22:43:57 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:57.952 [2024-11-20 22:43:58.226837] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:57.952 22:43:58 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.952 22:43:58 -- host/multipath.sh@111 -- # sleep 6 00:24:04.517 22:44:04 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:04.517 22:44:04 -- host/multipath.sh@65 -- # dtrace_pid=99832 00:24:04.517 22:44:04 -- host/multipath.sh@66 -- # sleep 6 00:24:04.517 22:44:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:11.094 22:44:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:11.094 22:44:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:11.094 22:44:10 -- host/multipath.sh@67 -- # active_port=4421 00:24:11.094 22:44:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:11.094 Attaching 4 probes... 
00:24:11.094 @path[10.0.0.2, 4421]: 20329 00:24:11.094 @path[10.0.0.2, 4421]: 20672 00:24:11.094 @path[10.0.0.2, 4421]: 20695 00:24:11.094 @path[10.0.0.2, 4421]: 20764 00:24:11.094 @path[10.0.0.2, 4421]: 20670 00:24:11.094 22:44:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:11.094 22:44:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:11.094 22:44:10 -- host/multipath.sh@69 -- # sed -n 1p 00:24:11.094 22:44:10 -- host/multipath.sh@69 -- # port=4421 00:24:11.094 22:44:10 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:11.094 22:44:10 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:11.094 22:44:10 -- host/multipath.sh@72 -- # kill 99832 00:24:11.094 22:44:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:11.094 22:44:10 -- host/multipath.sh@114 -- # killprocess 98893 00:24:11.094 22:44:10 -- common/autotest_common.sh@936 -- # '[' -z 98893 ']' 00:24:11.094 22:44:10 -- common/autotest_common.sh@940 -- # kill -0 98893 00:24:11.094 22:44:10 -- common/autotest_common.sh@941 -- # uname 00:24:11.094 22:44:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:11.094 22:44:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98893 00:24:11.094 killing process with pid 98893 00:24:11.094 22:44:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:11.094 22:44:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:11.094 22:44:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98893' 00:24:11.094 22:44:10 -- common/autotest_common.sh@955 -- # kill 98893 00:24:11.094 22:44:10 -- common/autotest_common.sh@960 -- # wait 98893 00:24:11.094 Connection closed with partial response: 00:24:11.094 00:24:11.094 00:24:11.094 22:44:11 -- host/multipath.sh@116 -- # wait 98893 00:24:11.094 22:44:11 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:11.094 [2024-11-20 22:43:13.636322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:11.094 [2024-11-20 22:43:13.636446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98893 ] 00:24:11.094 [2024-11-20 22:43:13.778318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.094 [2024-11-20 22:43:13.855006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.094 Running I/O for 90 seconds... 
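The per-command dump that follows is bdevperf's view of requests completing with ASYMMETRIC ACCESS INACCESSIBLE while the test flips ANA states underneath it. Each set_ANA_state/confirm_io_on_port round shown earlier boils down to roughly the following (a sketch reusing the subsystem, address, ports and bpftrace script from this run; $nvmfapp_pid stands for the target pid, 98789 here, script paths and the trace output location are shortened):
# Advertise 4420 as non-optimized and 4421 as optimized for the subsystem, so a
# multipath-aware initiator should move its I/O to port 4421.
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
# Count completed requests per listener for a few seconds; nvmf_path.bt prints
# "@path[10.0.0.2, <port>]: <count>" lines like the ones seen in trace.txt above.
scripts/bpftrace.sh "$nvmfapp_pid" scripts/bpf/nvmf_path.bt > trace.txt &
dtrace_pid=$!
sleep 6
kill "$dtrace_pid"
# The port whose listener reports the expected ANA state...
active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# ...must be the port bpftrace actually saw the I/O on.
port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
[[ "$port" == "$active_port" ]]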
00:24:11.094 [2024-11-20 22:43:23.882934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-11-20 22:43:23.882991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-11-20 22:43:23.883121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-11-20 22:43:23.883151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-11-20 22:43:23.883757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-11-20 22:43:23.883788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.094 [2024-11-20 22:43:23.883814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-11-20 22:43:23.883828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.883846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.883859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.883878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.883907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.883924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.883936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.883954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.883968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.883987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.884000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.886856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 
[2024-11-20 22:43:23.886888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.886913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.886928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.886946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.886974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.886993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.887006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.887113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.887179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:736 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.887243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.887618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.887728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.887974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.887992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-11-20 22:43:23.888004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.095 [2024-11-20 22:43:23.888022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-11-20 22:43:23.888035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.888053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.888065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.891200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.891243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.891287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.096 
[2024-11-20 22:43:23.891376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.891422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.891455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 
sqhd:0026 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.891752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.891765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.892077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.892110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.892555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.892618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.892633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.895363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.895396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.895423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.096 [2024-11-20 22:43:23.895439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.895459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.895472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:11.096 [2024-11-20 22:43:23.895491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.096 [2024-11-20 22:43:23.895505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.895538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.895570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.895602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.895635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.895667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.895699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 
[2024-11-20 22:43:23.895731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.895778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.895811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.895843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.895876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.895908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.895940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.895971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.895990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.896004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.896024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.896038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.896057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1112 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.896071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.898372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.898413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.898457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.898491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.898522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.898554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.898585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:23.898617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:23.898635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:23.898649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:30.392157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 
22:43:30.392612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.097 [2024-11-20 22:43:30.392626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.097 [2024-11-20 22:43:30.392646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-20 22:43:30.392662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.392713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.392759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.392776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.392789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.392807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.392821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.392838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.392851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.392870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.098 [2024-11-20 22:43:30.392883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.392901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.098 [2024-11-20 22:43:30.392914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.392941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.392956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.392976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.392989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 
cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.098 [2024-11-20 22:43:30.393021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.098 [2024-11-20 22:43:30.393253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.098 [2024-11-20 22:43:30.393302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.098 [2024-11-20 22:43:30.393355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.393970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.393985] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.394007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.394021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.394041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-20 22:43:30.394055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:11.098 [2024-11-20 22:43:30.394189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.098 [2024-11-20 22:43:30.394230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.394265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.394442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:11.099 [2024-11-20 22:43:30.394509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.394544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.394578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:68 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.394870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.394901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.394920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.394932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.395077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.395150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.395228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.395265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.395349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.395473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.395509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.099 [2024-11-20 22:43:30.395545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:24:11.099 [2024-11-20 22:43:30.395780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.099 [2024-11-20 22:43:30.395847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-20 22:43:30.395860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.395880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.395892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.395913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.395925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.395946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.395958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.395978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.395990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.396638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:11.100 [2024-11-20 22:43:30.396874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.396966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.396978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.397013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.397052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.397087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.397128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.397165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.397199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:109 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.100 [2024-11-20 22:43:30.397233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.397268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:11.100 [2024-11-20 22:43:30.397316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.100 [2024-11-20 22:43:30.397333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:30.397356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:30.397369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:30.397392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:30.397405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:30.397428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:30.397446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.319402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.319468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.319504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.319538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.319589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.319632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.319696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.319726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.319745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.319758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.320266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:24:11.101 [2024-11-20 22:43:37.320480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.320732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.320764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.320796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.320828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.320975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.320995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.321010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.321030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.321050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.321070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.321084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.321104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.321117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.321136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.321150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.321170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.321183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.321203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.321217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.321237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.101 [2024-11-20 22:43:37.321250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.101 [2024-11-20 22:43:37.321269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.101 [2024-11-20 22:43:37.321298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.321334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.321382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.321417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.321452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.321607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.321679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.321714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:11.102 [2024-11-20 22:43:37.321748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.321782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.321817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.321851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.321925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.321967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.321992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.322086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.322174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.322685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.322762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.322887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.322900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.323054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.102 [2024-11-20 22:43:37.323076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.323103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.323117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:24:11.102 [2024-11-20 22:43:37.323141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.323154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.102 [2024-11-20 22:43:37.323177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.102 [2024-11-20 22:43:37.323191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.323520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.323559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.323653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.323722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.323971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.323990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.324135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.324173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.324296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:11.103 [2024-11-20 22:43:37.324448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.324524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.324561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.324599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.103 [2024-11-20 22:43:37.324656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.103 [2024-11-20 22:43:37.324720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:11.103 [2024-11-20 22:43:37.324744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:37.324757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:37.324781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.104 [2024-11-20 22:43:37.324794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:37.324818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.104 [2024-11-20 22:43:37.324831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:11.104 [2024-11-20 22:43:50.677710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.677985] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.677999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678273] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.104 [2024-11-20 22:43:50.678474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.104 [2024-11-20 22:43:50.678488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.678653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.678694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.678742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.678765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.678814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.678838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.678862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 
22:43:50.678874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.678982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.678995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.679006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.679412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.679463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.105 [2024-11-20 22:43:50.679566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.105 [2024-11-20 22:43:50.679592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.105 [2024-11-20 22:43:50.679605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.679730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.679827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.679874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.679898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.679945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.679977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.679991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.680002] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.680049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.680390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.680416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.680468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.680698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.106 [2024-11-20 22:43:50.680734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.106 [2024-11-20 22:43:50.680746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.106 [2024-11-20 22:43:50.680757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 
22:43:50.680866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.107 [2024-11-20 22:43:50.680976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.680989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01120 is same with the state(5) to be set 00:24:11.107 [2024-11-20 22:43:50.681003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:11.107 [2024-11-20 22:43:50.681012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:11.107 [2024-11-20 22:43:50.681021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2168 len:8 PRP1 0x0 PRP2 0x0 00:24:11.107 [2024-11-20 22:43:50.681037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.107 [2024-11-20 22:43:50.681111] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d01120 was disconnected and freed. reset controller. 
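Every queued READ/WRITE on the dropped TCP qpair is completed above as ABORTED - SQ DELETION before bdev_nvme frees the qpair and resets the controller. How long the driver keeps trying to re-establish the controller is governed by the bdev_nvme reconnect options; the sketch below reuses the RPC invocations that the timeout test issues later in this log (the multipath run's own settings are not visible in this excerpt):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # retry setting passed by timeout.sh to the bdevperf instance (-r -1)
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # treat the controller as lost after 5 s of failed reconnects, retrying every 2 s
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2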
00:24:11.107 [2024-11-20 22:43:50.682510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:11.107 [2024-11-20 22:43:50.682600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d11b60 (9): Bad file descriptor 00:24:11.107 [2024-11-20 22:43:50.682739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.107 [2024-11-20 22:43:50.682794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.107 [2024-11-20 22:43:50.682815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d11b60 with addr=10.0.0.2, port=4421 00:24:11.107 [2024-11-20 22:43:50.682829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d11b60 is same with the state(5) to be set 00:24:11.107 [2024-11-20 22:43:50.682850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d11b60 (9): Bad file descriptor 00:24:11.107 [2024-11-20 22:43:50.682870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:11.107 [2024-11-20 22:43:50.682883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:11.107 [2024-11-20 22:43:50.682895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:11.107 [2024-11-20 22:43:50.682916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.107 [2024-11-20 22:43:50.682929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:11.107 [2024-11-20 22:44:00.733874] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
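The reset above succeeds by reconnecting the same subsystem through the second listener (10.0.0.2:4421): with two TCP listeners exported for nqn.2016-06.io.spdk:cnode1, removing the active one is enough to push bdev_nvme onto the surviving path. A rough sketch of that sequence, assuming the target-side rpc.py and the listener addresses used elsewhere in this log:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # export the subsystem on two ports so the initiator has a standby path
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  # ...run I/O, then drop the active path: queued commands are aborted
  # (SQ DELETION) and the controller reconnects on port 4421, as logged above
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420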
00:24:11.107 Received shutdown signal, test time was about 55.110734 seconds 00:24:11.107 00:24:11.107 Latency(us) 00:24:11.107 [2024-11-20T22:44:11.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.107 [2024-11-20T22:44:11.841Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:11.107 Verification LBA range: start 0x0 length 0x4000 00:24:11.107 Nvme0n1 : 55.11 12093.49 47.24 0.00 0.00 10568.07 290.44 7015926.69 00:24:11.107 [2024-11-20T22:44:11.841Z] =================================================================================================================== 00:24:11.107 [2024-11-20T22:44:11.841Z] Total : 12093.49 47.24 0.00 0.00 10568.07 290.44 7015926.69 00:24:11.107 22:44:11 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.107 22:44:11 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:11.107 22:44:11 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:11.107 22:44:11 -- host/multipath.sh@125 -- # nvmftestfini 00:24:11.107 22:44:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:11.107 22:44:11 -- nvmf/common.sh@116 -- # sync 00:24:11.107 22:44:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:11.107 22:44:11 -- nvmf/common.sh@119 -- # set +e 00:24:11.107 22:44:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:11.107 22:44:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:11.107 rmmod nvme_tcp 00:24:11.107 rmmod nvme_fabrics 00:24:11.107 rmmod nvme_keyring 00:24:11.107 22:44:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:11.107 22:44:11 -- nvmf/common.sh@123 -- # set -e 00:24:11.107 22:44:11 -- nvmf/common.sh@124 -- # return 0 00:24:11.107 22:44:11 -- nvmf/common.sh@477 -- # '[' -n 98789 ']' 00:24:11.107 22:44:11 -- nvmf/common.sh@478 -- # killprocess 98789 00:24:11.107 22:44:11 -- common/autotest_common.sh@936 -- # '[' -z 98789 ']' 00:24:11.107 22:44:11 -- common/autotest_common.sh@940 -- # kill -0 98789 00:24:11.107 22:44:11 -- common/autotest_common.sh@941 -- # uname 00:24:11.107 22:44:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:11.107 22:44:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98789 00:24:11.107 killing process with pid 98789 00:24:11.107 22:44:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:11.107 22:44:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:11.107 22:44:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98789' 00:24:11.107 22:44:11 -- common/autotest_common.sh@955 -- # kill 98789 00:24:11.107 22:44:11 -- common/autotest_common.sh@960 -- # wait 98789 00:24:11.107 22:44:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:11.107 22:44:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:11.107 22:44:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:11.107 22:44:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.107 22:44:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:11.107 22:44:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.107 22:44:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.107 22:44:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.107 22:44:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:11.107 00:24:11.107 real 1m1.185s 00:24:11.107 user 2m50.183s 00:24:11.107 
sys 0m15.085s 00:24:11.107 22:44:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:11.107 22:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:11.107 ************************************ 00:24:11.107 END TEST nvmf_multipath 00:24:11.107 ************************************ 00:24:11.107 22:44:11 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:11.107 22:44:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:11.107 22:44:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:11.107 22:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:11.107 ************************************ 00:24:11.107 START TEST nvmf_timeout 00:24:11.107 ************************************ 00:24:11.107 22:44:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:11.368 * Looking for test storage... 00:24:11.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:11.368 22:44:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:11.368 22:44:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:11.368 22:44:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:11.368 22:44:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:11.368 22:44:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:11.368 22:44:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:11.368 22:44:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:11.368 22:44:11 -- scripts/common.sh@335 -- # IFS=.-: 00:24:11.368 22:44:11 -- scripts/common.sh@335 -- # read -ra ver1 00:24:11.368 22:44:11 -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.368 22:44:11 -- scripts/common.sh@336 -- # read -ra ver2 00:24:11.368 22:44:11 -- scripts/common.sh@337 -- # local 'op=<' 00:24:11.368 22:44:11 -- scripts/common.sh@339 -- # ver1_l=2 00:24:11.368 22:44:11 -- scripts/common.sh@340 -- # ver2_l=1 00:24:11.368 22:44:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:11.368 22:44:11 -- scripts/common.sh@343 -- # case "$op" in 00:24:11.368 22:44:11 -- scripts/common.sh@344 -- # : 1 00:24:11.368 22:44:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:11.368 22:44:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.368 22:44:11 -- scripts/common.sh@364 -- # decimal 1 00:24:11.368 22:44:11 -- scripts/common.sh@352 -- # local d=1 00:24:11.368 22:44:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.368 22:44:11 -- scripts/common.sh@354 -- # echo 1 00:24:11.368 22:44:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:11.368 22:44:11 -- scripts/common.sh@365 -- # decimal 2 00:24:11.368 22:44:11 -- scripts/common.sh@352 -- # local d=2 00:24:11.368 22:44:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.368 22:44:11 -- scripts/common.sh@354 -- # echo 2 00:24:11.368 22:44:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:11.368 22:44:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:11.368 22:44:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:11.368 22:44:11 -- scripts/common.sh@367 -- # return 0 00:24:11.368 22:44:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.368 22:44:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:11.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.368 --rc genhtml_branch_coverage=1 00:24:11.368 --rc genhtml_function_coverage=1 00:24:11.368 --rc genhtml_legend=1 00:24:11.368 --rc geninfo_all_blocks=1 00:24:11.368 --rc geninfo_unexecuted_blocks=1 00:24:11.368 00:24:11.368 ' 00:24:11.368 22:44:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:11.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.368 --rc genhtml_branch_coverage=1 00:24:11.368 --rc genhtml_function_coverage=1 00:24:11.368 --rc genhtml_legend=1 00:24:11.368 --rc geninfo_all_blocks=1 00:24:11.368 --rc geninfo_unexecuted_blocks=1 00:24:11.368 00:24:11.368 ' 00:24:11.368 22:44:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:11.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.368 --rc genhtml_branch_coverage=1 00:24:11.368 --rc genhtml_function_coverage=1 00:24:11.368 --rc genhtml_legend=1 00:24:11.368 --rc geninfo_all_blocks=1 00:24:11.368 --rc geninfo_unexecuted_blocks=1 00:24:11.368 00:24:11.368 ' 00:24:11.368 22:44:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:11.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.368 --rc genhtml_branch_coverage=1 00:24:11.368 --rc genhtml_function_coverage=1 00:24:11.368 --rc genhtml_legend=1 00:24:11.368 --rc geninfo_all_blocks=1 00:24:11.368 --rc geninfo_unexecuted_blocks=1 00:24:11.368 00:24:11.368 ' 00:24:11.368 22:44:11 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:11.368 22:44:11 -- nvmf/common.sh@7 -- # uname -s 00:24:11.368 22:44:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.368 22:44:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.368 22:44:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.368 22:44:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.368 22:44:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.368 22:44:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.368 22:44:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.368 22:44:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.368 22:44:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.368 22:44:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.368 22:44:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:24:11.368 
22:44:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:24:11.368 22:44:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.368 22:44:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.368 22:44:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:11.368 22:44:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:11.368 22:44:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.368 22:44:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.368 22:44:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.368 22:44:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.368 22:44:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.368 22:44:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.368 22:44:11 -- paths/export.sh@5 -- # export PATH 00:24:11.368 22:44:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.368 22:44:11 -- nvmf/common.sh@46 -- # : 0 00:24:11.368 22:44:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:11.368 22:44:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:11.368 22:44:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:11.368 22:44:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.368 22:44:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.368 22:44:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
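The common.sh block above only defines connection parameters (the generated host NQN/ID and the 4420/4421/4422 service ports); nothing is connected yet. For reference, a kernel-initiator connect using those variables would look roughly like the line below (hypothetical here: this test drives I/O through bdevperf rather than nvme-cli, and the cnode1 subsystem is only created further down):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"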
00:24:11.368 22:44:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:11.368 22:44:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:11.368 22:44:11 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.368 22:44:11 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.368 22:44:11 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.368 22:44:11 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:11.368 22:44:11 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.368 22:44:11 -- host/timeout.sh@19 -- # nvmftestinit 00:24:11.368 22:44:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:11.368 22:44:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.368 22:44:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:11.368 22:44:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:11.368 22:44:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:11.368 22:44:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.368 22:44:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.368 22:44:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.368 22:44:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:11.368 22:44:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:11.368 22:44:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:11.368 22:44:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:11.368 22:44:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:11.368 22:44:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:11.368 22:44:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.368 22:44:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.369 22:44:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:11.369 22:44:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:11.369 22:44:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:11.369 22:44:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:11.369 22:44:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:11.369 22:44:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.369 22:44:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:11.369 22:44:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:11.369 22:44:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:11.369 22:44:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:11.369 22:44:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:11.369 22:44:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:11.369 Cannot find device "nvmf_tgt_br" 00:24:11.369 22:44:11 -- nvmf/common.sh@154 -- # true 00:24:11.369 22:44:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:11.369 Cannot find device "nvmf_tgt_br2" 00:24:11.369 22:44:11 -- nvmf/common.sh@155 -- # true 00:24:11.369 22:44:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:11.369 22:44:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:11.369 Cannot find device "nvmf_tgt_br" 00:24:11.369 22:44:11 -- nvmf/common.sh@157 -- # true 00:24:11.369 22:44:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:11.369 Cannot find device "nvmf_tgt_br2" 00:24:11.369 22:44:12 -- nvmf/common.sh@158 -- # true 00:24:11.369 22:44:12 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:11.369 22:44:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:11.369 22:44:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:11.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:11.369 22:44:12 -- nvmf/common.sh@161 -- # true 00:24:11.369 22:44:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:11.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:11.369 22:44:12 -- nvmf/common.sh@162 -- # true 00:24:11.369 22:44:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:11.369 22:44:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:11.369 22:44:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:11.369 22:44:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:11.369 22:44:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:11.628 22:44:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:11.628 22:44:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:11.628 22:44:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:11.628 22:44:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:11.628 22:44:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:11.628 22:44:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:11.628 22:44:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:11.628 22:44:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:11.628 22:44:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:11.628 22:44:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:11.628 22:44:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:11.628 22:44:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:11.628 22:44:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:11.628 22:44:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:11.628 22:44:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:11.628 22:44:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:11.628 22:44:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:11.628 22:44:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:11.628 22:44:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:11.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:24:11.628 00:24:11.628 --- 10.0.0.2 ping statistics --- 00:24:11.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.628 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:24:11.628 22:44:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:11.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:11.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:24:11.628 00:24:11.628 --- 10.0.0.3 ping statistics --- 00:24:11.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.628 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:11.628 22:44:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:11.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:24:11.628 00:24:11.628 --- 10.0.0.1 ping statistics --- 00:24:11.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.628 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:11.628 22:44:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.628 22:44:12 -- nvmf/common.sh@421 -- # return 0 00:24:11.628 22:44:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:11.628 22:44:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.628 22:44:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:11.628 22:44:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:11.628 22:44:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.628 22:44:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:11.628 22:44:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:11.628 22:44:12 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:11.628 22:44:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:11.628 22:44:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.628 22:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:11.628 22:44:12 -- nvmf/common.sh@469 -- # nvmfpid=100159 00:24:11.628 22:44:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:11.628 22:44:12 -- nvmf/common.sh@470 -- # waitforlisten 100159 00:24:11.628 22:44:12 -- common/autotest_common.sh@829 -- # '[' -z 100159 ']' 00:24:11.628 22:44:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.628 22:44:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.628 22:44:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.628 22:44:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.628 22:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:11.628 [2024-11-20 22:44:12.343354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:11.628 [2024-11-20 22:44:12.343446] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.887 [2024-11-20 22:44:12.484451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:11.887 [2024-11-20 22:44:12.570789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:11.888 [2024-11-20 22:44:12.571254] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.888 [2024-11-20 22:44:12.571425] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
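The nvmf_veth_init sequence traced above reduces to veth pairs joined by a bridge, with the target-side interface moved into the nvmf_tgt_ns_spdk namespace; a condensed sketch of those steps (only the initiator and first target interface are shown, with the names and addresses the trace uses):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability, as checked above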
00:24:11.888 [2024-11-20 22:44:12.571522] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.888 [2024-11-20 22:44:12.571791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.888 [2024-11-20 22:44:12.571805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.825 22:44:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.825 22:44:13 -- common/autotest_common.sh@862 -- # return 0 00:24:12.825 22:44:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:12.825 22:44:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.825 22:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:12.825 22:44:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.825 22:44:13 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:12.825 22:44:13 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:13.084 [2024-11-20 22:44:13.590729] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.084 22:44:13 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:13.343 Malloc0 00:24:13.343 22:44:13 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.601 22:44:14 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.602 22:44:14 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.860 [2024-11-20 22:44:14.555112] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:13.860 22:44:14 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:13.860 22:44:14 -- host/timeout.sh@32 -- # bdevperf_pid=100250 00:24:13.860 22:44:14 -- host/timeout.sh@34 -- # waitforlisten 100250 /var/tmp/bdevperf.sock 00:24:13.860 22:44:14 -- common/autotest_common.sh@829 -- # '[' -z 100250 ']' 00:24:13.860 22:44:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.860 22:44:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.860 22:44:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.860 22:44:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.860 22:44:14 -- common/autotest_common.sh@10 -- # set +x 00:24:14.118 [2024-11-20 22:44:14.610848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:14.118 [2024-11-20 22:44:14.610932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100250 ] 00:24:14.118 [2024-11-20 22:44:14.742746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.118 [2024-11-20 22:44:14.823972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.055 22:44:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.055 22:44:15 -- common/autotest_common.sh@862 -- # return 0 00:24:15.055 22:44:15 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:15.055 22:44:15 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:15.315 NVMe0n1 00:24:15.315 22:44:16 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.315 22:44:16 -- host/timeout.sh@51 -- # rpc_pid=100302 00:24:15.315 22:44:16 -- host/timeout.sh@53 -- # sleep 1 00:24:15.574 Running I/O for 10 seconds... 00:24:16.516 22:44:17 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.517 [2024-11-20 22:44:17.229485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229590] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229598] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [2024-11-20 22:44:17.229641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 
[2024-11-20 22:44:17.229661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.517 [... the identical tcp.c:1576 message for tqpair=0xa5fa60 repeats many more times in this burst; duplicate log lines omitted ...] 00:24:16.518 [2024-11-20 22:44:17.230141]
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5fa60 is same with the state(5) to be set 00:24:16.518 [2024-11-20 22:44:17.231446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.231713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.231730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.231765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.231781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.231939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.231956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.231972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.518 [2024-11-20 22:44:17.231989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.231998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.232006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.232015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.232023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.232032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.232040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.232049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.232056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.518 [2024-11-20 22:44:17.232066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.518 [2024-11-20 22:44:17.232073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:16.518 [2024-11-20 22:44:17.232097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.519 [2024-11-20 22:44:17.232105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.519 [2024-11-20 22:44:17.232122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.519 [2024-11-20 22:44:17.232140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.519 [2024-11-20 22:44:17.232158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232358] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.519 [2024-11-20 22:44:17.232780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.519 [2024-11-20 22:44:17.232789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:103 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.232807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.232824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.232842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.232861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.232879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.232897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.232915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.232933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.232950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.232968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14800 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.232986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.232995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.233003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.233039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.233056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.233074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 
[2024-11-20 22:44:17.233164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.233277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.520 [2024-11-20 22:44:17.233395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.520 [2024-11-20 22:44:17.233562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.520 [2024-11-20 22:44:17.233571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.233977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.233989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.233998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.234018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.234038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.234058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.234078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.234097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.521 [2024-11-20 22:44:17.234117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.234137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 
[2024-11-20 22:44:17.234149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.234158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.234179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.234199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.234219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.234262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.521 [2024-11-20 22:44:17.234316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.521 [2024-11-20 22:44:17.234389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.521 [2024-11-20 22:44:17.234401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.521 [2024-11-20 22:44:17.234409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:8 PRP1 0x0 PRP2 0x0 00:24:16.522 [2024-11-20 22:44:17.234418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.522 [2024-11-20 22:44:17.234470] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24b3b80 was disconnected and freed. reset controller. 
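What the burst above records is the host side reacting to the listener being removed from the subsystem: every I/O still queued on the connection is completed with ABORTED - SQ DELETION, the TCP qpair is disconnected and freed, and bdev_nvme starts resetting the controller. The reconnect behaviour that follows is governed by the options used when the controller was attached earlier in this run; repeated here as a sketch (bdevperf RPC socket and NQN as above), this is the call that bounds how long the host keeps retrying:

  # retry the TCP connection every 2 s and declare the controller lost after 5 s
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The connect() failures that follow (errno = 111, i.e. ECONNREFUSED) are expected: the target process is still running, it just no longer listens on 10.0.0.2:4420.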
00:24:16.522 [2024-11-20 22:44:17.234742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.522 [2024-11-20 22:44:17.234817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2482250 (9): Bad file descriptor 00:24:16.522 [2024-11-20 22:44:17.234910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.522 [2024-11-20 22:44:17.234962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.522 [2024-11-20 22:44:17.234979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2482250 with addr=10.0.0.2, port=4420 00:24:16.522 [2024-11-20 22:44:17.234989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2482250 is same with the state(5) to be set 00:24:16.522 [2024-11-20 22:44:17.235006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2482250 (9): Bad file descriptor 00:24:16.522 [2024-11-20 22:44:17.235021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.522 [2024-11-20 22:44:17.235030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.522 [2024-11-20 22:44:17.235039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.522 [2024-11-20 22:44:17.235057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.522 [2024-11-20 22:44:17.235068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.781 22:44:17 -- host/timeout.sh@56 -- # sleep 2 00:24:18.690 [2024-11-20 22:44:19.235149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.690 [2024-11-20 22:44:19.235216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.690 [2024-11-20 22:44:19.235232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2482250 with addr=10.0.0.2, port=4420 00:24:18.690 [2024-11-20 22:44:19.235242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2482250 is same with the state(5) to be set 00:24:18.690 [2024-11-20 22:44:19.235261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2482250 (9): Bad file descriptor 00:24:18.690 [2024-11-20 22:44:19.235287] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.690 [2024-11-20 22:44:19.235296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.690 [2024-11-20 22:44:19.235321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.690 [2024-11-20 22:44:19.235344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
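The reconnect attempts above all fail in posix_sock_create with errno 111, ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4420 at this point, so every retry the bdev_nvme reconnect logic schedules is refused and the controller stays in the failed state until the loss timeout expires. A minimal way to confirm from the shell that the port is closed during this window is sketched below; ss and nc are assumed to be available on the test VM and are not part of the suite.

    # assumed tooling, not from host/timeout.sh: show that 10.0.0.2:4420 is not accepting connections
    ss -ltn '( sport = :4420 )'                       # prints no listener while it is removed
    nc -z -w1 10.0.0.2 4420; echo "connect rc: $?"    # non-zero, matching errno 111 in the log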
00:24:18.690 [2024-11-20 22:44:19.235353] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.690 22:44:19 -- host/timeout.sh@57 -- # get_controller 00:24:18.690 22:44:19 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:18.690 22:44:19 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:18.950 22:44:19 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:18.950 22:44:19 -- host/timeout.sh@58 -- # get_bdev 00:24:18.950 22:44:19 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:18.950 22:44:19 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:19.209 22:44:19 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:19.209 22:44:19 -- host/timeout.sh@61 -- # sleep 5 00:24:20.587 [2024-11-20 22:44:21.235499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.587 [2024-11-20 22:44:21.235568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.587 [2024-11-20 22:44:21.235585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2482250 with addr=10.0.0.2, port=4420 00:24:20.587 [2024-11-20 22:44:21.235597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2482250 is same with the state(5) to be set 00:24:20.587 [2024-11-20 22:44:21.235618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2482250 (9): Bad file descriptor 00:24:20.587 [2024-11-20 22:44:21.235635] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:20.587 [2024-11-20 22:44:21.235643] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:20.587 [2024-11-20 22:44:21.235653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.587 [2024-11-20 22:44:21.235677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:20.587 [2024-11-20 22:44:21.235687] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.121 [2024-11-20 22:44:23.235723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.121 [2024-11-20 22:44:23.235754] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.121 [2024-11-20 22:44:23.235763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.121 [2024-11-20 22:44:23.235771] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:23.121 [2024-11-20 22:44:23.235793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
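The get_controller and get_bdev steps traced above are plain RPC-plus-jq probes: while reconnects keep failing but the controller-loss timeout has not yet expired, controller NVMe0 and bdev NVMe0n1 must still be registered. Condensed from the trace (same rpc.py path and RPC socket):

    ctrl=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
    [[ $ctrl == NVMe0 && $bdev == NVMe0n1 ]]   # still present: the loss timeout has not elapsed yet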
00:24:23.690 00:24:23.690 Latency(us) 00:24:23.690 [2024-11-20T22:44:24.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.690 [2024-11-20T22:44:24.424Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:23.690 Verification LBA range: start 0x0 length 0x4000 00:24:23.690 NVMe0n1 : 8.12 2237.23 8.74 15.77 0.00 56740.74 2338.44 7015926.69 00:24:23.690 [2024-11-20T22:44:24.424Z] =================================================================================================================== 00:24:23.690 [2024-11-20T22:44:24.424Z] Total : 2237.23 8.74 15.77 0.00 56740.74 2338.44 7015926.69 00:24:23.690 0 00:24:24.258 22:44:24 -- host/timeout.sh@62 -- # get_controller 00:24:24.258 22:44:24 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.258 22:44:24 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:24.518 22:44:25 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:24.518 22:44:25 -- host/timeout.sh@63 -- # get_bdev 00:24:24.518 22:44:25 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:24.518 22:44:25 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:24.777 22:44:25 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:24.777 22:44:25 -- host/timeout.sh@65 -- # wait 100302 00:24:24.777 22:44:25 -- host/timeout.sh@67 -- # killprocess 100250 00:24:24.777 22:44:25 -- common/autotest_common.sh@936 -- # '[' -z 100250 ']' 00:24:24.777 22:44:25 -- common/autotest_common.sh@940 -- # kill -0 100250 00:24:24.777 22:44:25 -- common/autotest_common.sh@941 -- # uname 00:24:24.777 22:44:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:24.777 22:44:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100250 00:24:24.777 killing process with pid 100250 00:24:24.777 Received shutdown signal, test time was about 9.225533 seconds 00:24:24.777 00:24:24.777 Latency(us) 00:24:24.777 [2024-11-20T22:44:25.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.777 [2024-11-20T22:44:25.511Z] =================================================================================================================== 00:24:24.777 [2024-11-20T22:44:25.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.777 22:44:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:24.777 22:44:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:24.777 22:44:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100250' 00:24:24.777 22:44:25 -- common/autotest_common.sh@955 -- # kill 100250 00:24:24.777 22:44:25 -- common/autotest_common.sh@960 -- # wait 100250 00:24:25.037 22:44:25 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.296 [2024-11-20 22:44:25.778003] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
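Once the controller-loss timeout expires the bdev layer gives up, deletes NVMe0 and its namespace bdev, and the same two probes now return empty strings, which is what the '' == '' checks above assert; the non-zero Fail/s column in the results table reflects I/O that was failed back while the controller was unreachable. A condensed form of the post-timeout assertion, using the same RPCs as the trace, is below; the listener is then re-added and a fresh bdevperf instance is started for the next sub-test, traced next.

    # sketch of the post-timeout checks: both lists are expected to come back empty
    [[ $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name') == '' ]]
    [[ $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name') == '' ]]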
00:24:25.296 22:44:25 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:25.296 22:44:25 -- host/timeout.sh@74 -- # bdevperf_pid=100455 00:24:25.296 22:44:25 -- host/timeout.sh@76 -- # waitforlisten 100455 /var/tmp/bdevperf.sock 00:24:25.296 22:44:25 -- common/autotest_common.sh@829 -- # '[' -z 100455 ']' 00:24:25.296 22:44:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.296 22:44:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.296 22:44:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.296 22:44:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.296 22:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.296 [2024-11-20 22:44:25.831461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:25.296 [2024-11-20 22:44:25.831529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100455 ] 00:24:25.296 [2024-11-20 22:44:25.958126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.296 [2024-11-20 22:44:26.025138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.235 22:44:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.236 22:44:26 -- common/autotest_common.sh@862 -- # return 0 00:24:26.236 22:44:26 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:26.496 22:44:27 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:26.756 NVMe0n1 00:24:26.756 22:44:27 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:26.756 22:44:27 -- host/timeout.sh@84 -- # rpc_pid=100504 00:24:26.756 22:44:27 -- host/timeout.sh@86 -- # sleep 1 00:24:26.756 Running I/O for 10 seconds... 
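The trace above is the whole setup for this sub-test: start bdevperf idle (-z) on core mask 0x4 with its own RPC socket, apply the suite's bdev_nvme options (-r -1), attach the controller with the three timeout knobs under test, then kick off the 10 second verify workload through bdevperf.py. Replayed in one place as a sketch; every path, flag and value is taken from the trace, while the backgrounding and the wait-for-socket step paraphrase what waitforlisten does.

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    # wait until the bdevperf RPC socket answers before configuring it (waitforlisten's job in the suite)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!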
00:24:27.694 22:44:28 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.991 [2024-11-20 22:44:28.598625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598751] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.991 [2024-11-20 22:44:28.598938] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.598946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.598953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.598961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.598968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.598976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.598985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.598993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599000] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599079] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 
00:24:27.992 [2024-11-20 22:44:28.599156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599172] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63d90 is same with the state(5) to be set 00:24:27.992 [2024-11-20 22:44:28.599540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.992 [2024-11-20 22:44:28.599971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.992 [2024-11-20 22:44:28.599978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.599987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.599995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 
[2024-11-20 22:44:28.600120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600546] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.600682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.600707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.601618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.601642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.601684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.993 [2024-11-20 22:44:28.601708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.993 [2024-11-20 22:44:28.601718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.601725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.601743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.601761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.601779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.601812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.601830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.601847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.601865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.601887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.601905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.601933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.601969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.601980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.602056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.602091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.602108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.602132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 
22:44:28.602186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.602420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.602438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.994 [2024-11-20 22:44:28.602455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.994 [2024-11-20 22:44:28.602495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.994 [2024-11-20 22:44:28.602512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.995 [2024-11-20 22:44:28.602853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.602984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.602992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 
[2024-11-20 22:44:28.603017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.995 [2024-11-20 22:44:28.603144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.995 [2024-11-20 22:44:28.603152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a509c0 is same with the state(5) to be set 00:24:27.995 [2024-11-20 22:44:28.603163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:27.995 [2024-11-20 22:44:28.603170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:27.995 [2024-11-20 22:44:28.603182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:8 PRP1 0x0 PRP2 0x0 00:24:27.996 [2024-11-20 22:44:28.603190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:27.996 [2024-11-20 22:44:28.603255] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a509c0 was disconnected and freed. reset controller. 00:24:27.996 [2024-11-20 22:44:28.603516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.996 [2024-11-20 22:44:28.603592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1f090 (9): Bad file descriptor 00:24:27.996 [2024-11-20 22:44:28.603701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.996 [2024-11-20 22:44:28.603745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.996 [2024-11-20 22:44:28.603760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1f090 with addr=10.0.0.2, port=4420 00:24:27.996 [2024-11-20 22:44:28.603769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1f090 is same with the state(5) to be set 00:24:27.996 [2024-11-20 22:44:28.603785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1f090 (9): Bad file descriptor 00:24:27.996 [2024-11-20 22:44:28.603800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.996 [2024-11-20 22:44:28.603810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.996 [2024-11-20 22:44:28.603819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.996 [2024-11-20 22:44:28.603837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.996 [2024-11-20 22:44:28.603847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.996 22:44:28 -- host/timeout.sh@90 -- # sleep 1 00:24:28.965 [2024-11-20 22:44:29.603935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.965 [2024-11-20 22:44:29.604017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.965 [2024-11-20 22:44:29.604033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1f090 with addr=10.0.0.2, port=4420 00:24:28.965 [2024-11-20 22:44:29.604044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1f090 is same with the state(5) to be set 00:24:28.965 [2024-11-20 22:44:29.604065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1f090 (9): Bad file descriptor 00:24:28.965 [2024-11-20 22:44:29.604082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.965 [2024-11-20 22:44:29.604091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.965 [2024-11-20 22:44:29.604100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.965 [2024-11-20 22:44:29.604122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.965 [2024-11-20 22:44:29.604133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.965 22:44:29 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.224 [2024-11-20 22:44:29.880295] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.224 22:44:29 -- host/timeout.sh@92 -- # wait 100504 00:24:30.161 [2024-11-20 22:44:30.617055] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:36.734 00:24:36.734 Latency(us) 00:24:36.734 [2024-11-20T22:44:37.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.734 [2024-11-20T22:44:37.468Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:36.734 Verification LBA range: start 0x0 length 0x4000 00:24:36.734 NVMe0n1 : 10.01 11537.00 45.07 0.00 0.00 11076.16 1079.85 3019898.88 00:24:36.734 [2024-11-20T22:44:37.468Z] =================================================================================================================== 00:24:36.734 [2024-11-20T22:44:37.468Z] Total : 11537.00 45.07 0.00 0.00 11076.16 1079.85 3019898.88 00:24:36.734 0 00:24:36.734 22:44:37 -- host/timeout.sh@97 -- # rpc_pid=100623 00:24:36.734 22:44:37 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.734 22:44:37 -- host/timeout.sh@98 -- # sleep 1 00:24:36.993 Running I/O for 10 seconds... 00:24:37.930 22:44:38 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.194 [2024-11-20 22:44:38.701383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701466] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701490] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701527] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701725] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701774] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 
00:24:38.194 [2024-11-20 22:44:38.701879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701895] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701917] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.701997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.702004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.702011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.702018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd9c0 is same with the state(5) to be set 00:24:38.194 [2024-11-20 22:44:38.702385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.194 [2024-11-20 22:44:38.702424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.194 [2024-11-20 22:44:38.702446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.702754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.702764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.195 [2024-11-20 22:44:38.703917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.195 [2024-11-20 22:44:38.703935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.703971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.703982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.195 [2024-11-20 22:44:38.704005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.195 [2024-11-20 22:44:38.704040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.195 [2024-11-20 22:44:38.704078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.195 [2024-11-20 22:44:38.704097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 
[2024-11-20 22:44:38.704151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.704548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.195 [2024-11-20 22:44:38.704557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.705026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.195 [2024-11-20 22:44:38.705114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.195 [2024-11-20 22:44:38.705128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.196 [2024-11-20 22:44:38.705546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.196 [2024-11-20 22:44:38.705567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.196 [2024-11-20 22:44:38.705587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.196 [2024-11-20 22:44:38.705626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15472 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.196 [2024-11-20 22:44:38.705662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.196 [2024-11-20 22:44:38.705685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.196 [2024-11-20 22:44:38.705759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.196 [2024-11-20 22:44:38.705815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:38.196 [2024-11-20 22:44:38.705868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.705887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.705898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.706226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.706459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.706471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.706482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.706493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.706504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.706513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.706524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.196 [2024-11-20 22:44:38.706532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.196 [2024-11-20 22:44:38.706739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.706760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.706772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.706781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.706807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.706815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.706825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.706834] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.706960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.706970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.707233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.707432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.707452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.707472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.707492] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.707947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.707958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.708193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.708238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.708422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:38.197 [2024-11-20 22:44:38.708592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.708601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.708744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.708874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.708885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.709000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.709010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.709021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.197 [2024-11-20 22:44:38.709029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.709166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.197 [2024-11-20 22:44:38.709258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.197 [2024-11-20 22:44:38.709271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.198 [2024-11-20 22:44:38.709307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.709412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.198 [2024-11-20 22:44:38.709424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.709435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.198 [2024-11-20 22:44:38.709444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.709547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.198 [2024-11-20 22:44:38.709557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.709568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.198 [2024-11-20 22:44:38.709577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.709588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.198 [2024-11-20 22:44:38.709854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.709978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.198 [2024-11-20 22:44:38.709993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.710004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d960 is same with the state(5) to be set 00:24:38.198 [2024-11-20 22:44:38.710128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:38.198 [2024-11-20 22:44:38.710145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:38.198 [2024-11-20 22:44:38.710155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15336 len:8 PRP1 0x0 PRP2 0x0 00:24:38.198 [2024-11-20 22:44:38.710255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.710556] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a4d960 was disconnected and freed. reset controller. 
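The block above is SPDK's qpair teardown path: once the target side tears down the submission queue, every request still queued on qid 1 is completed manually with "ABORTED - SQ DELETION", and nvme_qpair.c prints one command/completion pair per outstanding READ or WRITE before the qpair (0x1a4d960 here) is disconnected, freed, and the controller reset begins. When a run produces thousands of these notices it is easier to tally them offline than to read them; below is a minimal sketch that does that, assuming the console output has been saved to a file (the name console.log is illustrative, not part of the test).

```python
#!/usr/bin/env python3
"""Tally SPDK 'ABORTED - SQ DELETION' notices from a saved autotest console log.

Assumes the format shown above: nvme_io_qpair_print_command lines carry the
opcode plus sqid/cid, and the paired spdk_nvme_print_completion lines carry
the abort status. The log file name is illustrative.
"""
import re
import sys
from collections import Counter

CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION")


def tally(path: str) -> None:
    opcodes = Counter()
    aborts = 0
    with open(path, errors="replace") as log:
        for line in log:
            # A single wrapped console line may hold many notices, so use findall.
            for op, sqid, _cid in CMD_RE.findall(line):
                opcodes[(op, int(sqid))] += 1
            aborts += len(ABORT_RE.findall(line))
    for (op, sqid), count in sorted(opcodes.items()):
        print(f"{op:<5} sqid:{sqid} commands printed: {count}")
    print(f"ABORTED - SQ DELETION completions: {aborts}")


if __name__ == "__main__":
    tally(sys.argv[1] if len(sys.argv) > 1 else "console.log")
```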
00:24:38.198 [2024-11-20 22:44:38.710827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.198 [2024-11-20 22:44:38.710850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.710861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.198 [2024-11-20 22:44:38.710869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.710879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.198 [2024-11-20 22:44:38.710887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.710897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.198 [2024-11-20 22:44:38.710905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.198 [2024-11-20 22:44:38.710913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1f090 is same with the state(5) to be set 00:24:38.198 [2024-11-20 22:44:38.711351] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.198 [2024-11-20 22:44:38.711396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1f090 (9): Bad file descriptor 00:24:38.198 [2024-11-20 22:44:38.711588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.198 [2024-11-20 22:44:38.711853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.198 [2024-11-20 22:44:38.711881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1f090 with addr=10.0.0.2, port=4420 00:24:38.198 [2024-11-20 22:44:38.711892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1f090 is same with the state(5) to be set 00:24:38.198 [2024-11-20 22:44:38.711912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1f090 (9): Bad file descriptor 00:24:38.198 [2024-11-20 22:44:38.711929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.198 [2024-11-20 22:44:38.711939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.198 [2024-11-20 22:44:38.711948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.198 [2024-11-20 22:44:38.712170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
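The reconnect attempt above fails at the socket layer: posix.c reports connect() errno = 111, which on Linux is ECONNREFUSED, consistent with the test having removed the TCP listener on 10.0.0.2 port 4420 at this point (it is added back via nvmf_subsystem_add_listener a few seconds later, below). A standard-library one-liner confirms the mapping; the value 111 is Linux-specific.

```python
import errno
import os

# errno 111 on Linux is ECONNREFUSED ("Connection refused"), which is what
# posix_sock_create keeps reporting while the target listener is removed.
print(errno.errorcode[111], "-", os.strerror(111))
```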
00:24:38.198 [2024-11-20 22:44:38.712194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.198 22:44:38 -- host/timeout.sh@101 -- # sleep 3 00:24:39.134 [2024-11-20 22:44:39.712284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.134 [2024-11-20 22:44:39.712384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.134 [2024-11-20 22:44:39.712401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1f090 with addr=10.0.0.2, port=4420 00:24:39.134 [2024-11-20 22:44:39.712412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1f090 is same with the state(5) to be set 00:24:39.134 [2024-11-20 22:44:39.712431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1f090 (9): Bad file descriptor 00:24:39.134 [2024-11-20 22:44:39.712448] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.134 [2024-11-20 22:44:39.712457] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.134 [2024-11-20 22:44:39.712466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.134 [2024-11-20 22:44:39.712489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.134 [2024-11-20 22:44:39.712499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.070 [2024-11-20 22:44:40.712566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.070 [2024-11-20 22:44:40.712647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.070 [2024-11-20 22:44:40.712664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1f090 with addr=10.0.0.2, port=4420 00:24:40.070 [2024-11-20 22:44:40.712674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1f090 is same with the state(5) to be set 00:24:40.070 [2024-11-20 22:44:40.712692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1f090 (9): Bad file descriptor 00:24:40.070 [2024-11-20 22:44:40.712709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.070 [2024-11-20 22:44:40.712717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.070 [2024-11-20 22:44:40.712725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.070 [2024-11-20 22:44:40.712743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.070 [2024-11-20 22:44:40.712753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.007 [2024-11-20 22:44:41.714102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.007 [2024-11-20 22:44:41.714189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.007 [2024-11-20 22:44:41.714208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1f090 with addr=10.0.0.2, port=4420 00:24:41.007 [2024-11-20 22:44:41.714218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1f090 is same with the state(5) to be set 00:24:41.007 [2024-11-20 22:44:41.714392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1f090 (9): Bad file descriptor 00:24:41.007 [2024-11-20 22:44:41.714869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.007 [2024-11-20 22:44:41.714914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.007 [2024-11-20 22:44:41.714939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.007 [2024-11-20 22:44:41.717372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.007 [2024-11-20 22:44:41.717414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.007 22:44:41 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.267 [2024-11-20 22:44:41.978256] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.526 22:44:41 -- host/timeout.sh@103 -- # wait 100623 00:24:42.093 [2024-11-20 22:44:42.741642] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
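This retry loop is the point of the test: bdev_nvme re-attempts the controller reset roughly once per second while connect() keeps being refused, and only after host/timeout.sh@102 re-adds the TCP listener through rpc.py does the reset complete ("Resetting controller successful"). rpc.py is a thin JSON-RPC client over a Unix socket; a rough, hedged equivalent of that nvmf_subsystem_add_listener call is sketched below. The socket path /var/tmp/spdk.sock is SPDK's default and an assumption here, and the parameter field names follow common SPDK conventions and may differ between versions.

```python
#!/usr/bin/env python3
"""Hedged sketch: issue nvmf_subsystem_add_listener over SPDK's JSON-RPC socket.

Equivalent in spirit to:
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
Socket path and parameter spelling are assumptions based on SPDK defaults.
"""
import json
import socket


def rpc_call(method: str, params: dict, sock_path: str = "/var/tmp/spdk.sock") -> dict:
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                # Return as soon as a complete JSON object has arrived.
                return json.loads(buf)
            except json.JSONDecodeError:
                continue
    raise RuntimeError("no JSON-RPC response received")


if __name__ == "__main__":
    reply = rpc_call("nvmf_subsystem_add_listener", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {
            "trtype": "tcp",
            "adrfam": "ipv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
        },
    })
    print(reply)
```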
00:24:47.363 00:24:47.363 Latency(us) 00:24:47.363 [2024-11-20T22:44:48.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.363 [2024-11-20T22:44:48.097Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:47.363 Verification LBA range: start 0x0 length 0x4000 00:24:47.363 NVMe0n1 : 10.01 9865.06 38.54 7035.92 0.00 7557.05 444.97 3019898.88 00:24:47.363 [2024-11-20T22:44:48.097Z] =================================================================================================================== 00:24:47.363 [2024-11-20T22:44:48.097Z] Total : 9865.06 38.54 7035.92 0.00 7557.05 0.00 3019898.88 00:24:47.363 0 00:24:47.363 22:44:47 -- host/timeout.sh@105 -- # killprocess 100455 00:24:47.363 22:44:47 -- common/autotest_common.sh@936 -- # '[' -z 100455 ']' 00:24:47.363 22:44:47 -- common/autotest_common.sh@940 -- # kill -0 100455 00:24:47.363 22:44:47 -- common/autotest_common.sh@941 -- # uname 00:24:47.363 22:44:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:47.363 22:44:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100455 00:24:47.363 killing process with pid 100455 00:24:47.363 Received shutdown signal, test time was about 10.000000 seconds 00:24:47.363 00:24:47.363 Latency(us) 00:24:47.363 [2024-11-20T22:44:48.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.363 [2024-11-20T22:44:48.097Z] =================================================================================================================== 00:24:47.364 [2024-11-20T22:44:48.098Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.364 22:44:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:47.364 22:44:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:47.364 22:44:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100455' 00:24:47.364 22:44:47 -- common/autotest_common.sh@955 -- # kill 100455 00:24:47.364 22:44:47 -- common/autotest_common.sh@960 -- # wait 100455 00:24:47.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.364 22:44:47 -- host/timeout.sh@110 -- # bdevperf_pid=100749 00:24:47.364 22:44:47 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:24:47.364 22:44:47 -- host/timeout.sh@112 -- # waitforlisten 100749 /var/tmp/bdevperf.sock 00:24:47.364 22:44:47 -- common/autotest_common.sh@829 -- # '[' -z 100749 ']' 00:24:47.364 22:44:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.364 22:44:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.364 22:44:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.364 22:44:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.364 22:44:47 -- common/autotest_common.sh@10 -- # set +x 00:24:47.364 [2024-11-20 22:44:47.886255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
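The chunk above closes the first randread run and starts the next one: bdevperf prints its per-job summary for NVMe0n1 (10.01 s runtime, 9865.06 IOPS, 38.54 MiB/s, 7035.92 failed I/O per second, average/min/max latency of 7557.05/444.97/3019898.88 us; the high Fail/s figure is expected here, since the test deliberately drops the listener mid-run), host/timeout.sh@105 kills bdevperf pid 100455, and host/timeout.sh@109 relaunches bdevperf as pid 100749 with -q 128 -o 4096 -w randread -t 10 for the attach/remove-listener phase that follows. When comparing runs it is convenient to pull that summary row out of the console log programmatically; a small sketch under the same saved-console-log assumption as earlier:

```python
#!/usr/bin/env python3
"""Extract the bdevperf per-job summary row from a saved autotest console log.

Matches the flattened layout shown above:
  Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
  NVMe0n1 : 10.01 9865.06 38.54 7035.92 0.00 7557.05 444.97 3019898.88
The log file name is illustrative.
"""
import re
import sys

FIELDS = ("runtime_s", "iops", "mib_s", "fail_s", "timeout_s",
          "avg_lat_us", "min_lat_us", "max_lat_us")
ROW_RE = re.compile(r"\bNVMe0n1\s*:\s*((?:\d+\.\d+\s+){7}\d+\.\d+)")


def parse_summary(path: str):
    with open(path, errors="replace") as log:
        for line in log:
            match = ROW_RE.search(line)
            if match:
                values = [float(v) for v in match.group(1).split()]
                yield dict(zip(FIELDS, values))


if __name__ == "__main__":
    for row in parse_summary(sys.argv[1] if len(sys.argv) > 1 else "console.log"):
        print(row)
```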
00:24:47.364 [2024-11-20 22:44:47.886371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100749 ] 00:24:47.364 [2024-11-20 22:44:48.020234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.364 [2024-11-20 22:44:48.076517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.300 22:44:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.300 22:44:48 -- common/autotest_common.sh@862 -- # return 0 00:24:48.300 22:44:48 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100749 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:48.300 22:44:48 -- host/timeout.sh@116 -- # dtrace_pid=100777 00:24:48.300 22:44:48 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:48.560 22:44:49 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:48.818 NVMe0n1 00:24:48.818 22:44:49 -- host/timeout.sh@124 -- # rpc_pid=100826 00:24:48.818 22:44:49 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.818 22:44:49 -- host/timeout.sh@125 -- # sleep 1 00:24:48.818 Running I/O for 10 seconds... 00:24:49.753 22:44:50 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.016 [2024-11-20 22:44:50.635643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635753] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635768] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635818] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635944] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.635995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636020] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636049] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636123] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 
00:24:50.016 [2024-11-20 22:44:50.636130] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636249] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is 
same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.636312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0db0 is same with the state(5) to be set 00:24:50.016 [2024-11-20 22:44:50.637230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.016 [2024-11-20 22:44:50.637297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.016 [2024-11-20 22:44:50.637352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.637365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.637376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.637386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.637397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.637407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.637417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.637427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.637437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.637446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.637457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.637466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.637477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.637934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638012] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.638829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.638952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.639983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.639997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.640129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.640143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.640250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.640264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.640273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.640430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.640441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.640452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.640583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.640729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.640739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.640750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.640758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.017 [2024-11-20 22:44:50.640769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.017 [2024-11-20 22:44:50.640777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.640788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.640796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.640807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.640816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.640826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.640835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.640896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.640909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.640920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.640929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.640940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.640948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.640958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.640967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.641083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.641097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 
[2024-11-20 22:44:50.641107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.641116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.641375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.641398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.641534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.641547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.641557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.641683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.641704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.641806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.641827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.641836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.641847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.641980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642419] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.642905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.642914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.643949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.643957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.644070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.644081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.644099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.644227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.644240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.644481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.644511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.644522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.644535] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.644544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.644555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.018 [2024-11-20 22:44:50.644564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.018 [2024-11-20 22:44:50.644831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.644853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.644864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.644873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.645899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.645910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.646941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.646949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 
[2024-11-20 22:44:50.647505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.647975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.647985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.648237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.648260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.648341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.648356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.019 [2024-11-20 22:44:50.648365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.019 [2024-11-20 22:44:50.648375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.020 [2024-11-20 22:44:50.648383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.648394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.020 [2024-11-20 22:44:50.648402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.648637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.020 [2024-11-20 22:44:50.648650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.648662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.020 [2024-11-20 22:44:50.648671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.648681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.020 [2024-11-20 22:44:50.648774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.648787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.020 [2024-11-20 22:44:50.648796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.649046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.020 [2024-11-20 22:44:50.649066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.020 [2024-11-20 22:44:50.649075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123960 len:8 PRP1 0x0 PRP2 0x0 00:24:50.020 [2024-11-20 22:44:50.649085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.649254] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e0d10 was disconnected and freed. reset controller. 
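The burst of NOTICE lines above is the initiator draining queue pair 1 during the controller reset: every outstanding READ is completed manually with ABORTED - SQ DELETION (status code type 00, status code 08) before qpair 0x17e0d10 is disconnected and freed. A minimal sketch for sizing that backlog after the fact, assuming the console output was saved to a file named nvmf.log (the file name is an assumption, not something the test writes):

  # Count aborted completions per queue id; each match corresponds to one queued I/O
  # that was failed back when its submission queue was deleted.
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' nvmf.log | sort | uniq -c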
00:24:50.020 [2024-11-20 22:44:50.649644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.020 [2024-11-20 22:44:50.649675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.649686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.020 [2024-11-20 22:44:50.649710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.649719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.020 [2024-11-20 22:44:50.649727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.649736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.020 [2024-11-20 22:44:50.649744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.020 [2024-11-20 22:44:50.649882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af0b0 is same with the state(5) to be set 00:24:50.020 [2024-11-20 22:44:50.650415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.020 [2024-11-20 22:44:50.650451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af0b0 (9): Bad file descriptor 00:24:50.020 [2024-11-20 22:44:50.650753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.020 [2024-11-20 22:44:50.650837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.020 [2024-11-20 22:44:50.650854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af0b0 with addr=10.0.0.2, port=4420 00:24:50.020 [2024-11-20 22:44:50.650948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af0b0 is same with the state(5) to be set 00:24:50.020 [2024-11-20 22:44:50.650978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af0b0 (9): Bad file descriptor 00:24:50.020 [2024-11-20 22:44:50.651107] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.020 [2024-11-20 22:44:50.651130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.020 [2024-11-20 22:44:50.651219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.020 [2024-11-20 22:44:50.651243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
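The reconnect attempt above fails in posix_sock_create with errno = 111 (ECONNREFUSED): nothing is accepting on 10.0.0.2 port 4420 at that instant, so nvme_tcp_qpair_connect_sock reports a sock connection error and the reset is marked failed. A hedged sketch of how a script could wait for the listener to come back before retrying, using bash's /dev/tcp pseudo-device (this helper is illustrative and is not part of the test suite):

  wait_for_listener() {
      local ip=$1 port=$2 retries=${3:-10}
      local i
      for ((i = 0; i < retries; i++)); do
          # The subshell opens (and implicitly closes) a TCP connection; success
          # means something is accepting on ip:port.
          if (exec 3<> "/dev/tcp/$ip/$port") 2> /dev/null; then
              return 0
          fi
          sleep 1
      done
      return 1
  }

  wait_for_listener 10.0.0.2 4420 || echo "port 4420 still refusing connections" >&2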
00:24:50.020 [2024-11-20 22:44:50.651254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.020 22:44:50 -- host/timeout.sh@128 -- # wait 100826 00:24:51.925 [2024-11-20 22:44:52.651371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.925 [2024-11-20 22:44:52.651455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.925 [2024-11-20 22:44:52.651472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af0b0 with addr=10.0.0.2, port=4420 00:24:51.925 [2024-11-20 22:44:52.651482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af0b0 is same with the state(5) to be set 00:24:51.925 [2024-11-20 22:44:52.651501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af0b0 (9): Bad file descriptor 00:24:51.925 [2024-11-20 22:44:52.651516] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.925 [2024-11-20 22:44:52.651525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.925 [2024-11-20 22:44:52.651534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.925 [2024-11-20 22:44:52.651554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.925 [2024-11-20 22:44:52.651563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.458 [2024-11-20 22:44:54.651664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.458 [2024-11-20 22:44:54.651758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.458 [2024-11-20 22:44:54.651776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af0b0 with addr=10.0.0.2, port=4420 00:24:54.458 [2024-11-20 22:44:54.651786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af0b0 is same with the state(5) to be set 00:24:54.458 [2024-11-20 22:44:54.651804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af0b0 (9): Bad file descriptor 00:24:54.458 [2024-11-20 22:44:54.651818] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.458 [2024-11-20 22:44:54.651827] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.458 [2024-11-20 22:44:54.651834] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.458 [2024-11-20 22:44:54.651853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.458 [2024-11-20 22:44:54.651862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.361 [2024-11-20 22:44:56.651898] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
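The failed connect() calls above occur at 22:44:50.650, 22:44:52.651 and 22:44:54.651, roughly two seconds apart, which matches the reconnect-delay probes reported further down. If the console output is captured to a file (nvme.log here is a placeholder name), the spacing can be checked directly from the nvme_ctrlr_disconnect notices:

  # Print the gap, in seconds, between successive "resetting controller" notices;
  # duplicate notices from the same attempt show up as near-zero gaps.
  grep -o '[0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}\.[0-9]*] nvme_ctrlr.c:1639' nvme.log |
      awk -F '[]:]' '{ t = $1 * 3600 + $2 * 60 + $3; if (NR > 1) printf "%.3f\n", t - prev; prev = t }'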
00:24:56.361 [2024-11-20 22:44:56.651926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.361 [2024-11-20 22:44:56.651951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.361 [2024-11-20 22:44:56.651960] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:56.361 [2024-11-20 22:44:56.651979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.930 00:24:56.930 Latency(us) 00:24:56.930 [2024-11-20T22:44:57.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.930 [2024-11-20T22:44:57.664Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:56.930 NVMe0n1 : 8.14 3324.70 12.99 15.72 0.00 38344.19 2740.60 7046430.72 00:24:56.930 [2024-11-20T22:44:57.664Z] =================================================================================================================== 00:24:56.930 [2024-11-20T22:44:57.664Z] Total : 3324.70 12.99 15.72 0.00 38344.19 2740.60 7046430.72 00:24:56.930 0 00:24:57.190 22:44:57 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:57.190 Attaching 5 probes... 00:24:57.190 1294.461265: reset bdev controller NVMe0 00:24:57.190 1294.551920: reconnect bdev controller NVMe0 00:24:57.190 3295.326758: reconnect delay bdev controller NVMe0 00:24:57.190 3295.362939: reconnect bdev controller NVMe0 00:24:57.190 5295.635825: reconnect delay bdev controller NVMe0 00:24:57.190 5295.663931: reconnect bdev controller NVMe0 00:24:57.190 7295.933684: reconnect delay bdev controller NVMe0 00:24:57.190 7295.947521: reconnect bdev controller NVMe0 00:24:57.190 22:44:57 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:57.190 22:44:57 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:57.190 22:44:57 -- host/timeout.sh@136 -- # kill 100777 00:24:57.190 22:44:57 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:57.190 22:44:57 -- host/timeout.sh@139 -- # killprocess 100749 00:24:57.190 22:44:57 -- common/autotest_common.sh@936 -- # '[' -z 100749 ']' 00:24:57.190 22:44:57 -- common/autotest_common.sh@940 -- # kill -0 100749 00:24:57.190 22:44:57 -- common/autotest_common.sh@941 -- # uname 00:24:57.190 22:44:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.190 22:44:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100749 00:24:57.190 killing process with pid 100749 00:24:57.190 Received shutdown signal, test time was about 8.208026 seconds 00:24:57.190 00:24:57.190 Latency(us) 00:24:57.190 [2024-11-20T22:44:57.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.190 [2024-11-20T22:44:57.924Z] =================================================================================================================== 00:24:57.190 [2024-11-20T22:44:57.924Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.190 22:44:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:57.190 22:44:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:57.190 22:44:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100749' 00:24:57.190 22:44:57 -- common/autotest_common.sh@955 -- # kill 100749 00:24:57.190 22:44:57 -- common/autotest_common.sh@960 -- # wait 100749 00:24:57.190 
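The trace dump above shows the probe timeline: the reset at ~1294 ms is followed by delayed reconnects at ~3295, ~5295 and ~7295 ms, i.e. the expected ~2000 ms reconnect delay, and the script then feeds the 'grep -c' count of the delay marker into an arithmetic test against 2. A sketch of that style of assertion against trace.txt (the exact control flow in host/timeout.sh may differ):

  delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
  if (( delay_count <= 2 )); then
      echo "expected more than 2 delayed reconnects, got $delay_count" >&2
      exit 1
  fi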
22:44:57 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.451 22:44:58 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:57.451 22:44:58 -- host/timeout.sh@145 -- # nvmftestfini 00:24:57.451 22:44:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:57.451 22:44:58 -- nvmf/common.sh@116 -- # sync 00:24:57.710 22:44:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:57.710 22:44:58 -- nvmf/common.sh@119 -- # set +e 00:24:57.710 22:44:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:57.710 22:44:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:57.710 rmmod nvme_tcp 00:24:57.710 rmmod nvme_fabrics 00:24:57.710 rmmod nvme_keyring 00:24:57.710 22:44:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:57.710 22:44:58 -- nvmf/common.sh@123 -- # set -e 00:24:57.710 22:44:58 -- nvmf/common.sh@124 -- # return 0 00:24:57.710 22:44:58 -- nvmf/common.sh@477 -- # '[' -n 100159 ']' 00:24:57.710 22:44:58 -- nvmf/common.sh@478 -- # killprocess 100159 00:24:57.710 22:44:58 -- common/autotest_common.sh@936 -- # '[' -z 100159 ']' 00:24:57.710 22:44:58 -- common/autotest_common.sh@940 -- # kill -0 100159 00:24:57.710 22:44:58 -- common/autotest_common.sh@941 -- # uname 00:24:57.710 22:44:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.710 22:44:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100159 00:24:57.710 killing process with pid 100159 00:24:57.710 22:44:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:57.710 22:44:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:57.710 22:44:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100159' 00:24:57.710 22:44:58 -- common/autotest_common.sh@955 -- # kill 100159 00:24:57.710 22:44:58 -- common/autotest_common.sh@960 -- # wait 100159 00:24:57.969 22:44:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:57.969 22:44:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:57.969 22:44:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:57.969 22:44:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:57.969 22:44:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:57.969 22:44:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.969 22:44:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.969 22:44:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.969 22:44:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:57.969 00:24:57.969 real 0m46.859s 00:24:57.969 user 2m16.374s 00:24:57.969 sys 0m5.647s 00:24:57.969 22:44:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:57.969 ************************************ 00:24:57.969 END TEST nvmf_timeout 00:24:57.969 ************************************ 00:24:57.969 22:44:58 -- common/autotest_common.sh@10 -- # set +x 00:24:57.969 22:44:58 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:24:57.969 22:44:58 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:24:57.969 22:44:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.969 22:44:58 -- common/autotest_common.sh@10 -- # set +x 00:24:58.229 22:44:58 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:58.229 00:24:58.229 real 17m31.822s 00:24:58.229 user 55m50.762s 00:24:58.229 sys 3m40.812s 00:24:58.229 22:44:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:58.229 22:44:58 -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.229 ************************************ 00:24:58.229 END TEST nvmf_tcp 00:24:58.229 ************************************ 00:24:58.229 22:44:58 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:24:58.229 22:44:58 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:58.229 22:44:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:58.229 22:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:58.229 22:44:58 -- common/autotest_common.sh@10 -- # set +x 00:24:58.229 ************************************ 00:24:58.229 START TEST spdkcli_nvmf_tcp 00:24:58.229 ************************************ 00:24:58.229 22:44:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:58.229 * Looking for test storage... 00:24:58.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:58.229 22:44:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:58.229 22:44:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:58.229 22:44:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:58.229 22:44:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:58.229 22:44:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:58.229 22:44:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:58.229 22:44:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:58.229 22:44:58 -- scripts/common.sh@335 -- # IFS=.-: 00:24:58.229 22:44:58 -- scripts/common.sh@335 -- # read -ra ver1 00:24:58.229 22:44:58 -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.229 22:44:58 -- scripts/common.sh@336 -- # read -ra ver2 00:24:58.229 22:44:58 -- scripts/common.sh@337 -- # local 'op=<' 00:24:58.229 22:44:58 -- scripts/common.sh@339 -- # ver1_l=2 00:24:58.229 22:44:58 -- scripts/common.sh@340 -- # ver2_l=1 00:24:58.230 22:44:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:58.230 22:44:58 -- scripts/common.sh@343 -- # case "$op" in 00:24:58.230 22:44:58 -- scripts/common.sh@344 -- # : 1 00:24:58.230 22:44:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:58.230 22:44:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:58.230 22:44:58 -- scripts/common.sh@364 -- # decimal 1 00:24:58.230 22:44:58 -- scripts/common.sh@352 -- # local d=1 00:24:58.230 22:44:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.230 22:44:58 -- scripts/common.sh@354 -- # echo 1 00:24:58.230 22:44:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:58.230 22:44:58 -- scripts/common.sh@365 -- # decimal 2 00:24:58.230 22:44:58 -- scripts/common.sh@352 -- # local d=2 00:24:58.230 22:44:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.230 22:44:58 -- scripts/common.sh@354 -- # echo 2 00:24:58.230 22:44:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:58.230 22:44:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:58.230 22:44:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:58.230 22:44:58 -- scripts/common.sh@367 -- # return 0 00:24:58.230 22:44:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.230 22:44:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.230 --rc genhtml_branch_coverage=1 00:24:58.230 --rc genhtml_function_coverage=1 00:24:58.230 --rc genhtml_legend=1 00:24:58.230 --rc geninfo_all_blocks=1 00:24:58.230 --rc geninfo_unexecuted_blocks=1 00:24:58.230 00:24:58.230 ' 00:24:58.230 22:44:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.230 --rc genhtml_branch_coverage=1 00:24:58.230 --rc genhtml_function_coverage=1 00:24:58.230 --rc genhtml_legend=1 00:24:58.230 --rc geninfo_all_blocks=1 00:24:58.230 --rc geninfo_unexecuted_blocks=1 00:24:58.230 00:24:58.230 ' 00:24:58.230 22:44:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.230 --rc genhtml_branch_coverage=1 00:24:58.230 --rc genhtml_function_coverage=1 00:24:58.230 --rc genhtml_legend=1 00:24:58.230 --rc geninfo_all_blocks=1 00:24:58.230 --rc geninfo_unexecuted_blocks=1 00:24:58.230 00:24:58.230 ' 00:24:58.230 22:44:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.230 --rc genhtml_branch_coverage=1 00:24:58.230 --rc genhtml_function_coverage=1 00:24:58.230 --rc genhtml_legend=1 00:24:58.230 --rc geninfo_all_blocks=1 00:24:58.230 --rc geninfo_unexecuted_blocks=1 00:24:58.230 00:24:58.230 ' 00:24:58.230 22:44:58 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:58.230 22:44:58 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:58.230 22:44:58 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:58.230 22:44:58 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:58.230 22:44:58 -- nvmf/common.sh@7 -- # uname -s 00:24:58.230 22:44:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.230 22:44:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.230 22:44:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.230 22:44:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.230 22:44:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.230 22:44:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.230 22:44:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
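The trace above this point walks scripts/common.sh's version check: 'lt 1.15 2' delegates to cmp_versions, which splits both version strings and compares the numeric fields one by one, returning success because 1 < 2 in the first field. A self-contained sketch of that field-wise comparison (the helper name and layout here are illustrative; the real implementation lives in scripts/common.sh):

  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # versions are equal
  }

  version_lt 1.15 2 && echo "1.15 < 2"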
00:24:58.230 22:44:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.230 22:44:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.230 22:44:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.230 22:44:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:24:58.230 22:44:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:24:58.230 22:44:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.230 22:44:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.230 22:44:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:58.230 22:44:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.230 22:44:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.230 22:44:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.230 22:44:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.230 22:44:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.230 22:44:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.230 22:44:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.230 22:44:58 -- paths/export.sh@5 -- # export PATH 00:24:58.230 22:44:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.230 22:44:58 -- nvmf/common.sh@46 -- # : 0 00:24:58.230 22:44:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:58.230 22:44:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:58.230 22:44:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:58.230 22:44:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.230 22:44:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.230 22:44:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:58.230 22:44:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:58.230 22:44:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:58.230 22:44:58 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:58.230 22:44:58 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:58.230 22:44:58 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:58.230 22:44:58 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:58.230 22:44:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.230 22:44:58 -- common/autotest_common.sh@10 -- # set +x 00:24:58.230 22:44:58 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:58.230 22:44:58 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101056 00:24:58.230 22:44:58 -- spdkcli/common.sh@34 -- # waitforlisten 101056 00:24:58.230 22:44:58 -- common/autotest_common.sh@829 -- # '[' -z 101056 ']' 00:24:58.230 22:44:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.230 22:44:58 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:58.230 22:44:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.230 22:44:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.230 22:44:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.230 22:44:58 -- common/autotest_common.sh@10 -- # set +x 00:24:58.489 [2024-11-20 22:44:59.005194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:58.489 [2024-11-20 22:44:59.005297] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101056 ] 00:24:58.489 [2024-11-20 22:44:59.134122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:58.489 [2024-11-20 22:44:59.205387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:58.489 [2024-11-20 22:44:59.205702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.489 [2024-11-20 22:44:59.205709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.425 22:44:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.425 22:44:59 -- common/autotest_common.sh@862 -- # return 0 00:24:59.425 22:44:59 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:59.425 22:44:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:59.425 22:44:59 -- common/autotest_common.sh@10 -- # set +x 00:24:59.425 22:44:59 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:59.425 22:44:59 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:59.425 22:44:59 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:59.425 22:44:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:59.425 22:44:59 -- common/autotest_common.sh@10 -- # set +x 00:24:59.425 22:44:59 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:59.425 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:59.425 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:59.425 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:59.425 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:59.425 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:59.425 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:59.425 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:59.425 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:59.425 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:59.425 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:59.425 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:59.425 ' 00:24:59.991 [2024-11-20 22:45:00.464766] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:02.525 [2024-11-20 22:45:02.748045] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.460 [2024-11-20 22:45:04.033098] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:05.990 [2024-11-20 22:45:06.418409] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:07.958 [2024-11-20 22:45:08.463719] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:09.332 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:09.332 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:09.332 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:09.332 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:09.332 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:09.332 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:09.332 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:09.332 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:09.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:09.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:09.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:09.332 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:09.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:09.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:09.332 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:09.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:09.333 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:09.591 22:45:10 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:09.591 22:45:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:09.591 22:45:10 -- common/autotest_common.sh@10 -- # set +x 00:25:09.591 22:45:10 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:09.591 22:45:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:09.591 22:45:10 -- common/autotest_common.sh@10 -- # set +x 00:25:09.591 22:45:10 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:09.591 22:45:10 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:10.159 22:45:10 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:10.159 22:45:10 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:10.159 22:45:10 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:10.159 22:45:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.159 22:45:10 -- common/autotest_common.sh@10 -- # set +x 00:25:10.159 22:45:10 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:10.159 22:45:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.159 22:45:10 -- common/autotest_common.sh@10 -- # set +x 00:25:10.159 22:45:10 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:10.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:10.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:10.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:10.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:10.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:10.159 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:10.159 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:10.159 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:10.159 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:10.159 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:10.159 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:10.159 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:10.159 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:10.159 ' 00:25:15.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:15.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:15.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:15.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:15.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:15.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:15.431 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:15.431 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:15.431 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:15.431 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:15.431 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:15.431 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:15.431 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:15.431 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:15.690 22:45:16 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:15.690 22:45:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.690 22:45:16 -- common/autotest_common.sh@10 -- # set +x 00:25:15.690 22:45:16 -- spdkcli/nvmf.sh@90 -- # killprocess 101056 00:25:15.690 22:45:16 -- common/autotest_common.sh@936 -- # '[' -z 101056 ']' 00:25:15.690 22:45:16 -- common/autotest_common.sh@940 -- # kill -0 101056 00:25:15.690 22:45:16 -- common/autotest_common.sh@941 -- # uname 00:25:15.690 22:45:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:15.690 22:45:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101056 00:25:15.690 22:45:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:15.690 22:45:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:15.690 killing process with pid 101056 00:25:15.690 22:45:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101056' 00:25:15.690 22:45:16 -- common/autotest_common.sh@955 -- # kill 101056 00:25:15.690 [2024-11-20 22:45:16.277142] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:15.690 22:45:16 -- common/autotest_common.sh@960 -- # wait 101056 00:25:15.949 22:45:16 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:15.949 22:45:16 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:15.949 22:45:16 -- spdkcli/common.sh@13 -- # '[' -n 101056 ']' 00:25:15.949 22:45:16 -- spdkcli/common.sh@14 -- # killprocess 101056 00:25:15.949 22:45:16 -- common/autotest_common.sh@936 -- # '[' -z 101056 ']' 00:25:15.949 22:45:16 -- common/autotest_common.sh@940 -- # kill -0 101056 00:25:15.949 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101056) - No such process 00:25:15.949 Process with pid 101056 is not found 00:25:15.949 22:45:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101056 is not found' 00:25:15.949 22:45:16 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:15.949 22:45:16 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:15.949 22:45:16 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:15.949 00:25:15.949 real 0m17.709s 00:25:15.949 user 0m38.410s 00:25:15.949 sys 0m0.907s 00:25:15.949 ************************************ 00:25:15.949 END TEST spdkcli_nvmf_tcp 00:25:15.949 ************************************ 00:25:15.949 22:45:16 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:25:15.949 22:45:16 -- common/autotest_common.sh@10 -- # set +x 00:25:15.949 22:45:16 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:15.949 22:45:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:15.949 22:45:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:15.949 22:45:16 -- common/autotest_common.sh@10 -- # set +x 00:25:15.949 ************************************ 00:25:15.949 START TEST nvmf_identify_passthru 00:25:15.949 ************************************ 00:25:15.949 22:45:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:15.949 * Looking for test storage... 00:25:15.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:15.949 22:45:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:15.949 22:45:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:15.949 22:45:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:16.208 22:45:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:16.208 22:45:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:16.208 22:45:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:16.208 22:45:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:16.208 22:45:16 -- scripts/common.sh@335 -- # IFS=.-: 00:25:16.208 22:45:16 -- scripts/common.sh@335 -- # read -ra ver1 00:25:16.208 22:45:16 -- scripts/common.sh@336 -- # IFS=.-: 00:25:16.208 22:45:16 -- scripts/common.sh@336 -- # read -ra ver2 00:25:16.208 22:45:16 -- scripts/common.sh@337 -- # local 'op=<' 00:25:16.208 22:45:16 -- scripts/common.sh@339 -- # ver1_l=2 00:25:16.208 22:45:16 -- scripts/common.sh@340 -- # ver2_l=1 00:25:16.208 22:45:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:16.208 22:45:16 -- scripts/common.sh@343 -- # case "$op" in 00:25:16.208 22:45:16 -- scripts/common.sh@344 -- # : 1 00:25:16.208 22:45:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:16.208 22:45:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:16.208 22:45:16 -- scripts/common.sh@364 -- # decimal 1 00:25:16.208 22:45:16 -- scripts/common.sh@352 -- # local d=1 00:25:16.208 22:45:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:16.208 22:45:16 -- scripts/common.sh@354 -- # echo 1 00:25:16.208 22:45:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:16.208 22:45:16 -- scripts/common.sh@365 -- # decimal 2 00:25:16.208 22:45:16 -- scripts/common.sh@352 -- # local d=2 00:25:16.208 22:45:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:16.208 22:45:16 -- scripts/common.sh@354 -- # echo 2 00:25:16.208 22:45:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:16.208 22:45:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:16.208 22:45:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:16.208 22:45:16 -- scripts/common.sh@367 -- # return 0 00:25:16.208 22:45:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:16.208 22:45:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:16.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.208 --rc genhtml_branch_coverage=1 00:25:16.208 --rc genhtml_function_coverage=1 00:25:16.208 --rc genhtml_legend=1 00:25:16.208 --rc geninfo_all_blocks=1 00:25:16.208 --rc geninfo_unexecuted_blocks=1 00:25:16.208 00:25:16.208 ' 00:25:16.209 22:45:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.209 --rc genhtml_branch_coverage=1 00:25:16.209 --rc genhtml_function_coverage=1 00:25:16.209 --rc genhtml_legend=1 00:25:16.209 --rc geninfo_all_blocks=1 00:25:16.209 --rc geninfo_unexecuted_blocks=1 00:25:16.209 00:25:16.209 ' 00:25:16.209 22:45:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.209 --rc genhtml_branch_coverage=1 00:25:16.209 --rc genhtml_function_coverage=1 00:25:16.209 --rc genhtml_legend=1 00:25:16.209 --rc geninfo_all_blocks=1 00:25:16.209 --rc geninfo_unexecuted_blocks=1 00:25:16.209 00:25:16.209 ' 00:25:16.209 22:45:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.209 --rc genhtml_branch_coverage=1 00:25:16.209 --rc genhtml_function_coverage=1 00:25:16.209 --rc genhtml_legend=1 00:25:16.209 --rc geninfo_all_blocks=1 00:25:16.209 --rc geninfo_unexecuted_blocks=1 00:25:16.209 00:25:16.209 ' 00:25:16.209 22:45:16 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:16.209 22:45:16 -- nvmf/common.sh@7 -- # uname -s 00:25:16.209 22:45:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.209 22:45:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.209 22:45:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.209 22:45:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.209 22:45:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.209 22:45:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.209 22:45:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.209 22:45:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.209 22:45:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.209 22:45:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.209 22:45:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 
00:25:16.209 22:45:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:25:16.209 22:45:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.209 22:45:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.209 22:45:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:16.209 22:45:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:16.209 22:45:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.209 22:45:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.209 22:45:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.209 22:45:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.209 22:45:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.209 22:45:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.209 22:45:16 -- paths/export.sh@5 -- # export PATH 00:25:16.209 22:45:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.209 22:45:16 -- nvmf/common.sh@46 -- # : 0 00:25:16.209 22:45:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:16.209 22:45:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:16.209 22:45:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:16.209 22:45:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.209 22:45:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.209 22:45:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:16.209 22:45:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:16.209 22:45:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:16.209 22:45:16 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:16.209 22:45:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.209 22:45:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.209 22:45:16 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.209 22:45:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.209 22:45:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.209 22:45:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.209 22:45:16 -- paths/export.sh@5 -- # export PATH 00:25:16.209 22:45:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.209 22:45:16 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:16.209 22:45:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:16.209 22:45:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.209 22:45:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:16.209 22:45:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:16.209 22:45:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:16.209 22:45:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.209 22:45:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:16.209 22:45:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.209 22:45:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:16.209 22:45:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:16.209 22:45:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:16.209 22:45:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:16.209 22:45:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:16.209 22:45:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:16.209 22:45:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.209 22:45:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.209 22:45:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:16.209 22:45:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:16.209 22:45:16 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:16.209 22:45:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:16.209 22:45:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:16.209 22:45:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.209 22:45:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:16.209 22:45:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:16.209 22:45:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:16.209 22:45:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:16.209 22:45:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:16.209 22:45:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:16.209 Cannot find device "nvmf_tgt_br" 00:25:16.209 22:45:16 -- nvmf/common.sh@154 -- # true 00:25:16.209 22:45:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:16.209 Cannot find device "nvmf_tgt_br2" 00:25:16.209 22:45:16 -- nvmf/common.sh@155 -- # true 00:25:16.209 22:45:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:16.209 22:45:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:16.209 Cannot find device "nvmf_tgt_br" 00:25:16.209 22:45:16 -- nvmf/common.sh@157 -- # true 00:25:16.209 22:45:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:16.209 Cannot find device "nvmf_tgt_br2" 00:25:16.209 22:45:16 -- nvmf/common.sh@158 -- # true 00:25:16.209 22:45:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:16.209 22:45:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:16.209 22:45:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:16.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:16.209 22:45:16 -- nvmf/common.sh@161 -- # true 00:25:16.209 22:45:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:16.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:16.209 22:45:16 -- nvmf/common.sh@162 -- # true 00:25:16.209 22:45:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:16.209 22:45:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:16.209 22:45:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:16.209 22:45:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:16.209 22:45:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:16.468 22:45:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:16.468 22:45:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:16.468 22:45:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:16.468 22:45:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:16.468 22:45:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:16.469 22:45:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:16.469 22:45:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:16.469 22:45:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:16.469 22:45:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:16.469 22:45:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:16.469 22:45:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:16.469 22:45:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:16.469 22:45:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:16.469 22:45:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:16.469 22:45:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:16.469 22:45:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:16.469 22:45:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:16.469 22:45:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:16.469 22:45:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:16.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:25:16.469 00:25:16.469 --- 10.0.0.2 ping statistics --- 00:25:16.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.469 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:16.469 22:45:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:16.469 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:16.469 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:25:16.469 00:25:16.469 --- 10.0.0.3 ping statistics --- 00:25:16.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.469 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:16.469 22:45:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:16.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:16.469 00:25:16.469 --- 10.0.0.1 ping statistics --- 00:25:16.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.469 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:16.469 22:45:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.469 22:45:17 -- nvmf/common.sh@421 -- # return 0 00:25:16.469 22:45:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:16.469 22:45:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.469 22:45:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:16.469 22:45:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:16.469 22:45:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.469 22:45:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:16.469 22:45:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:16.469 22:45:17 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:16.469 22:45:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:16.469 22:45:17 -- common/autotest_common.sh@10 -- # set +x 00:25:16.469 22:45:17 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:16.469 22:45:17 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:16.469 22:45:17 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:16.469 22:45:17 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:16.469 22:45:17 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:16.469 22:45:17 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:16.469 22:45:17 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:16.469 22:45:17 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:16.469 22:45:17 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:16.469 22:45:17 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:16.469 22:45:17 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:16.469 22:45:17 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:16.469 22:45:17 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:16.469 22:45:17 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:16.469 22:45:17 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:16.469 22:45:17 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:16.469 22:45:17 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:16.469 22:45:17 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:16.728 22:45:17 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:16.728 22:45:17 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:16.728 22:45:17 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:16.728 22:45:17 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:16.986 22:45:17 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:16.986 22:45:17 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:16.986 22:45:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:16.986 22:45:17 -- common/autotest_common.sh@10 -- # set +x 00:25:16.986 22:45:17 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:16.986 22:45:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:16.986 22:45:17 -- common/autotest_common.sh@10 -- # set +x 00:25:16.986 22:45:17 -- target/identify_passthru.sh@31 -- # nvmfpid=101561 00:25:16.986 22:45:17 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:16.986 22:45:17 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.986 22:45:17 -- target/identify_passthru.sh@35 -- # waitforlisten 101561 00:25:16.986 22:45:17 -- common/autotest_common.sh@829 -- # '[' -z 101561 ']' 00:25:16.986 22:45:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.986 22:45:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.986 22:45:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.986 22:45:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.986 22:45:17 -- common/autotest_common.sh@10 -- # set +x 00:25:16.986 [2024-11-20 22:45:17.619163] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:16.986 [2024-11-20 22:45:17.619257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.244 [2024-11-20 22:45:17.749004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.244 [2024-11-20 22:45:17.809846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:17.245 [2024-11-20 22:45:17.810015] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.245 [2024-11-20 22:45:17.810044] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.245 [2024-11-20 22:45:17.810053] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
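For reference, the target launch traced above (identify_passthru.sh@30-35) amounts to starting nvmf_tgt inside the test namespace with RPC gating and waiting for its UNIX socket before any configuration is sent. A minimal stand-alone sketch using the same binary path and flags as the trace; the polling loop is illustrative only (the test uses the waitforlisten helper), and hugepages/privileges are assumed to be set up already:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # wait for the RPC server socket before issuing any rpc.py calls
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # the test then enables passthru identify and finishes init via:
  #   rpc_cmd nvmf_set_config --passthru-identify-ctrlr
  #   rpc_cmd framework_start_init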
00:25:17.245 [2024-11-20 22:45:17.810663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.245 [2024-11-20 22:45:17.811642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.245 [2024-11-20 22:45:17.811844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.245 [2024-11-20 22:45:17.811849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.245 22:45:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.245 22:45:17 -- common/autotest_common.sh@862 -- # return 0 00:25:17.245 22:45:17 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:17.245 22:45:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.245 22:45:17 -- common/autotest_common.sh@10 -- # set +x 00:25:17.245 22:45:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.245 22:45:17 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:17.245 22:45:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.245 22:45:17 -- common/autotest_common.sh@10 -- # set +x 00:25:17.504 [2024-11-20 22:45:17.979836] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:17.504 22:45:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.504 22:45:17 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:17.504 22:45:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.504 22:45:17 -- common/autotest_common.sh@10 -- # set +x 00:25:17.504 [2024-11-20 22:45:17.993783] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.504 22:45:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.504 22:45:18 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:17.504 22:45:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:17.504 22:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:17.504 22:45:18 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:17.504 22:45:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.504 22:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:17.504 Nvme0n1 00:25:17.504 22:45:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.504 22:45:18 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:17.504 22:45:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.504 22:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:17.504 22:45:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.504 22:45:18 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:17.504 22:45:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.504 22:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:17.504 22:45:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.504 22:45:18 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.504 22:45:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.504 22:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:17.504 [2024-11-20 22:45:18.134989] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.504 22:45:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:17.504 22:45:18 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:17.504 22:45:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.504 22:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:17.504 [2024-11-20 22:45:18.142752] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:17.504 [ 00:25:17.504 { 00:25:17.504 "allow_any_host": true, 00:25:17.504 "hosts": [], 00:25:17.504 "listen_addresses": [], 00:25:17.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:17.504 "subtype": "Discovery" 00:25:17.504 }, 00:25:17.504 { 00:25:17.504 "allow_any_host": true, 00:25:17.504 "hosts": [], 00:25:17.504 "listen_addresses": [ 00:25:17.504 { 00:25:17.504 "adrfam": "IPv4", 00:25:17.504 "traddr": "10.0.0.2", 00:25:17.504 "transport": "TCP", 00:25:17.504 "trsvcid": "4420", 00:25:17.504 "trtype": "TCP" 00:25:17.504 } 00:25:17.504 ], 00:25:17.504 "max_cntlid": 65519, 00:25:17.504 "max_namespaces": 1, 00:25:17.504 "min_cntlid": 1, 00:25:17.504 "model_number": "SPDK bdev Controller", 00:25:17.504 "namespaces": [ 00:25:17.504 { 00:25:17.504 "bdev_name": "Nvme0n1", 00:25:17.504 "name": "Nvme0n1", 00:25:17.504 "nguid": "3971C5E2ACCF438C92E26F97E859D940", 00:25:17.504 "nsid": 1, 00:25:17.504 "uuid": "3971c5e2-accf-438c-92e2-6f97e859d940" 00:25:17.504 } 00:25:17.504 ], 00:25:17.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.504 "serial_number": "SPDK00000000000001", 00:25:17.504 "subtype": "NVMe" 00:25:17.504 } 00:25:17.504 ] 00:25:17.504 22:45:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.504 22:45:18 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:17.504 22:45:18 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:17.504 22:45:18 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:17.763 22:45:18 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:17.763 22:45:18 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:17.763 22:45:18 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:17.763 22:45:18 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:18.022 22:45:18 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:18.023 22:45:18 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:18.023 22:45:18 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:18.023 22:45:18 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.023 22:45:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.023 22:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:18.023 22:45:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.023 22:45:18 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:18.023 22:45:18 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:18.023 22:45:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:18.023 22:45:18 -- nvmf/common.sh@116 -- # sync 00:25:18.023 22:45:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:18.023 22:45:18 -- nvmf/common.sh@119 -- # set +e 00:25:18.023 22:45:18 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:18.023 22:45:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:18.023 rmmod nvme_tcp 00:25:18.023 rmmod nvme_fabrics 00:25:18.023 rmmod nvme_keyring 00:25:18.023 22:45:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:18.023 22:45:18 -- nvmf/common.sh@123 -- # set -e 00:25:18.023 22:45:18 -- nvmf/common.sh@124 -- # return 0 00:25:18.023 22:45:18 -- nvmf/common.sh@477 -- # '[' -n 101561 ']' 00:25:18.023 22:45:18 -- nvmf/common.sh@478 -- # killprocess 101561 00:25:18.023 22:45:18 -- common/autotest_common.sh@936 -- # '[' -z 101561 ']' 00:25:18.023 22:45:18 -- common/autotest_common.sh@940 -- # kill -0 101561 00:25:18.023 22:45:18 -- common/autotest_common.sh@941 -- # uname 00:25:18.023 22:45:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.023 22:45:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101561 00:25:18.281 22:45:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:18.281 22:45:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:18.281 killing process with pid 101561 00:25:18.281 22:45:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101561' 00:25:18.281 22:45:18 -- common/autotest_common.sh@955 -- # kill 101561 00:25:18.281 [2024-11-20 22:45:18.761630] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:18.281 22:45:18 -- common/autotest_common.sh@960 -- # wait 101561 00:25:18.540 22:45:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:18.540 22:45:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:18.540 22:45:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:18.540 22:45:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.540 22:45:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:18.540 22:45:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.540 22:45:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:18.540 22:45:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.540 22:45:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:18.540 00:25:18.540 real 0m2.553s 00:25:18.540 user 0m5.028s 00:25:18.540 sys 0m0.788s 00:25:18.540 22:45:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:18.540 ************************************ 00:25:18.540 END TEST nvmf_identify_passthru 00:25:18.540 ************************************ 00:25:18.540 22:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:18.540 22:45:19 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:18.540 22:45:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:18.540 22:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:18.540 22:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:18.540 ************************************ 00:25:18.540 START TEST nvmf_dif 00:25:18.540 ************************************ 00:25:18.540 22:45:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:18.540 * Looking for test storage... 
00:25:18.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:18.540 22:45:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:18.540 22:45:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:18.540 22:45:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:18.798 22:45:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:18.798 22:45:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:18.798 22:45:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:18.798 22:45:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:18.798 22:45:19 -- scripts/common.sh@335 -- # IFS=.-: 00:25:18.798 22:45:19 -- scripts/common.sh@335 -- # read -ra ver1 00:25:18.798 22:45:19 -- scripts/common.sh@336 -- # IFS=.-: 00:25:18.798 22:45:19 -- scripts/common.sh@336 -- # read -ra ver2 00:25:18.798 22:45:19 -- scripts/common.sh@337 -- # local 'op=<' 00:25:18.798 22:45:19 -- scripts/common.sh@339 -- # ver1_l=2 00:25:18.798 22:45:19 -- scripts/common.sh@340 -- # ver2_l=1 00:25:18.798 22:45:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:18.798 22:45:19 -- scripts/common.sh@343 -- # case "$op" in 00:25:18.798 22:45:19 -- scripts/common.sh@344 -- # : 1 00:25:18.798 22:45:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:18.798 22:45:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:18.798 22:45:19 -- scripts/common.sh@364 -- # decimal 1 00:25:18.798 22:45:19 -- scripts/common.sh@352 -- # local d=1 00:25:18.798 22:45:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:18.798 22:45:19 -- scripts/common.sh@354 -- # echo 1 00:25:18.798 22:45:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:18.798 22:45:19 -- scripts/common.sh@365 -- # decimal 2 00:25:18.798 22:45:19 -- scripts/common.sh@352 -- # local d=2 00:25:18.798 22:45:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:18.798 22:45:19 -- scripts/common.sh@354 -- # echo 2 00:25:18.798 22:45:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:18.798 22:45:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:18.798 22:45:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:18.798 22:45:19 -- scripts/common.sh@367 -- # return 0 00:25:18.798 22:45:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:18.798 22:45:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:18.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.798 --rc genhtml_branch_coverage=1 00:25:18.798 --rc genhtml_function_coverage=1 00:25:18.798 --rc genhtml_legend=1 00:25:18.798 --rc geninfo_all_blocks=1 00:25:18.798 --rc geninfo_unexecuted_blocks=1 00:25:18.798 00:25:18.798 ' 00:25:18.798 22:45:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:18.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.798 --rc genhtml_branch_coverage=1 00:25:18.798 --rc genhtml_function_coverage=1 00:25:18.798 --rc genhtml_legend=1 00:25:18.798 --rc geninfo_all_blocks=1 00:25:18.798 --rc geninfo_unexecuted_blocks=1 00:25:18.798 00:25:18.798 ' 00:25:18.798 22:45:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:18.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.798 --rc genhtml_branch_coverage=1 00:25:18.798 --rc genhtml_function_coverage=1 00:25:18.798 --rc genhtml_legend=1 00:25:18.798 --rc geninfo_all_blocks=1 00:25:18.798 --rc geninfo_unexecuted_blocks=1 00:25:18.798 00:25:18.798 ' 00:25:18.798 
22:45:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:18.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.798 --rc genhtml_branch_coverage=1 00:25:18.798 --rc genhtml_function_coverage=1 00:25:18.798 --rc genhtml_legend=1 00:25:18.798 --rc geninfo_all_blocks=1 00:25:18.798 --rc geninfo_unexecuted_blocks=1 00:25:18.798 00:25:18.798 ' 00:25:18.798 22:45:19 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:18.798 22:45:19 -- nvmf/common.sh@7 -- # uname -s 00:25:18.798 22:45:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.798 22:45:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.798 22:45:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.798 22:45:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.798 22:45:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.799 22:45:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.799 22:45:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.799 22:45:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.799 22:45:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.799 22:45:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.799 22:45:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:25:18.799 22:45:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:25:18.799 22:45:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.799 22:45:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.799 22:45:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:18.799 22:45:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:18.799 22:45:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.799 22:45:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.799 22:45:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.799 22:45:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.799 22:45:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.799 22:45:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.799 22:45:19 -- paths/export.sh@5 -- # export PATH 00:25:18.799 22:45:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.799 22:45:19 -- nvmf/common.sh@46 -- # : 0 00:25:18.799 22:45:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:18.799 22:45:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:18.799 22:45:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:18.799 22:45:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.799 22:45:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.799 22:45:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:18.799 22:45:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:18.799 22:45:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:18.799 22:45:19 -- target/dif.sh@15 -- # NULL_META=16 00:25:18.799 22:45:19 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:18.799 22:45:19 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:18.799 22:45:19 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:18.799 22:45:19 -- target/dif.sh@135 -- # nvmftestinit 00:25:18.799 22:45:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:18.799 22:45:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.799 22:45:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:18.799 22:45:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:18.799 22:45:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:18.799 22:45:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.799 22:45:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:18.799 22:45:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.799 22:45:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:18.799 22:45:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:18.799 22:45:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:18.799 22:45:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:18.799 22:45:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:18.799 22:45:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:18.799 22:45:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.799 22:45:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.799 22:45:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:18.799 22:45:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:18.799 22:45:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:18.799 22:45:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:18.799 22:45:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:18.799 22:45:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.799 22:45:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:18.799 22:45:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:18.799 22:45:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:18.799 22:45:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:18.799 22:45:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:18.799 22:45:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:18.799 Cannot find device "nvmf_tgt_br" 
00:25:18.799 22:45:19 -- nvmf/common.sh@154 -- # true 00:25:18.799 22:45:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:18.799 Cannot find device "nvmf_tgt_br2" 00:25:18.799 22:45:19 -- nvmf/common.sh@155 -- # true 00:25:18.799 22:45:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:18.799 22:45:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:18.799 Cannot find device "nvmf_tgt_br" 00:25:18.799 22:45:19 -- nvmf/common.sh@157 -- # true 00:25:18.799 22:45:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:18.799 Cannot find device "nvmf_tgt_br2" 00:25:18.799 22:45:19 -- nvmf/common.sh@158 -- # true 00:25:18.799 22:45:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:18.799 22:45:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:18.799 22:45:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:18.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:18.799 22:45:19 -- nvmf/common.sh@161 -- # true 00:25:18.799 22:45:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:18.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:18.799 22:45:19 -- nvmf/common.sh@162 -- # true 00:25:18.799 22:45:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:18.799 22:45:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:18.799 22:45:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:18.799 22:45:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:18.799 22:45:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:18.799 22:45:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:18.799 22:45:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:18.799 22:45:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:18.799 22:45:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:18.799 22:45:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:18.799 22:45:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:18.799 22:45:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:19.057 22:45:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:19.057 22:45:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:19.057 22:45:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:19.057 22:45:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:19.057 22:45:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:19.057 22:45:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:19.057 22:45:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:19.057 22:45:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:19.057 22:45:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:19.057 22:45:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:19.058 22:45:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:19.058 22:45:19 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:19.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:25:19.058 00:25:19.058 --- 10.0.0.2 ping statistics --- 00:25:19.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.058 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:19.058 22:45:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:19.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:19.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:25:19.058 00:25:19.058 --- 10.0.0.3 ping statistics --- 00:25:19.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.058 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:25:19.058 22:45:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:19.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:25:19.058 00:25:19.058 --- 10.0.0.1 ping statistics --- 00:25:19.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.058 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:19.058 22:45:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.058 22:45:19 -- nvmf/common.sh@421 -- # return 0 00:25:19.058 22:45:19 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:19.058 22:45:19 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:19.316 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:19.316 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:19.316 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:19.316 22:45:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.316 22:45:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:19.316 22:45:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:19.316 22:45:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.316 22:45:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:19.316 22:45:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:19.316 22:45:20 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:19.316 22:45:20 -- target/dif.sh@137 -- # nvmfappstart 00:25:19.316 22:45:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:19.316 22:45:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.316 22:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:19.575 22:45:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:19.575 22:45:20 -- nvmf/common.sh@469 -- # nvmfpid=101901 00:25:19.575 22:45:20 -- nvmf/common.sh@470 -- # waitforlisten 101901 00:25:19.575 22:45:20 -- common/autotest_common.sh@829 -- # '[' -z 101901 ']' 00:25:19.575 22:45:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.575 22:45:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.575 22:45:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
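The nvmf_veth_init sequence traced here (and earlier for the identify_passthru run) builds a small veth/bridge topology: the initiator interface stays in the root namespace at 10.0.0.1, while the target interfaces live in nvmf_tgt_ns_spdk at 10.0.0.2 and 10.0.0.3, all joined through the nvmf_br bridge. A condensed stand-alone sketch using the same iproute2 commands shown in the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # root namespace -> target namespace sanity check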
00:25:19.575 22:45:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.575 22:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:19.575 [2024-11-20 22:45:20.095095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:19.575 [2024-11-20 22:45:20.095175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.575 [2024-11-20 22:45:20.231412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.834 [2024-11-20 22:45:20.323008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:19.834 [2024-11-20 22:45:20.323216] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.834 [2024-11-20 22:45:20.323236] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.834 [2024-11-20 22:45:20.323250] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.834 [2024-11-20 22:45:20.323313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.402 22:45:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.402 22:45:21 -- common/autotest_common.sh@862 -- # return 0 00:25:20.402 22:45:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:20.402 22:45:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:20.402 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 22:45:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.661 22:45:21 -- target/dif.sh@139 -- # create_transport 00:25:20.661 22:45:21 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:20.661 22:45:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.661 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 [2024-11-20 22:45:21.168922] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.661 22:45:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.661 22:45:21 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:20.661 22:45:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:20.661 22:45:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.661 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 ************************************ 00:25:20.661 START TEST fio_dif_1_default 00:25:20.661 ************************************ 00:25:20.661 22:45:21 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:20.661 22:45:21 -- target/dif.sh@86 -- # create_subsystems 0 00:25:20.661 22:45:21 -- target/dif.sh@28 -- # local sub 00:25:20.661 22:45:21 -- target/dif.sh@30 -- # for sub in "$@" 00:25:20.661 22:45:21 -- target/dif.sh@31 -- # create_subsystem 0 00:25:20.661 22:45:21 -- target/dif.sh@18 -- # local sub_id=0 00:25:20.661 22:45:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:20.661 22:45:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.661 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 bdev_null0 00:25:20.661 22:45:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.661 22:45:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:20.661 22:45:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.661 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 22:45:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.661 22:45:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:20.661 22:45:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.661 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 22:45:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.661 22:45:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.661 22:45:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.661 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 [2024-11-20 22:45:21.217134] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.661 22:45:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.661 22:45:21 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:20.661 22:45:21 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:20.661 22:45:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:20.661 22:45:21 -- nvmf/common.sh@520 -- # config=() 00:25:20.661 22:45:21 -- nvmf/common.sh@520 -- # local subsystem config 00:25:20.661 22:45:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.661 22:45:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.661 22:45:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.661 { 00:25:20.661 "params": { 00:25:20.661 "name": "Nvme$subsystem", 00:25:20.661 "trtype": "$TEST_TRANSPORT", 00:25:20.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.661 "adrfam": "ipv4", 00:25:20.661 "trsvcid": "$NVMF_PORT", 00:25:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.661 "hdgst": ${hdgst:-false}, 00:25:20.661 "ddgst": ${ddgst:-false} 00:25:20.661 }, 00:25:20.661 "method": "bdev_nvme_attach_controller" 00:25:20.661 } 00:25:20.661 EOF 00:25:20.661 )") 00:25:20.661 22:45:21 -- target/dif.sh@82 -- # gen_fio_conf 00:25:20.661 22:45:21 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.661 22:45:21 -- target/dif.sh@54 -- # local file 00:25:20.661 22:45:21 -- target/dif.sh@56 -- # cat 00:25:20.661 22:45:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:20.661 22:45:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:20.661 22:45:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:20.661 22:45:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.661 22:45:21 -- common/autotest_common.sh@1330 -- # shift 00:25:20.661 22:45:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:20.661 22:45:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.661 22:45:21 -- nvmf/common.sh@542 -- # cat 00:25:20.661 22:45:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:20.661 22:45:21 -- target/dif.sh@72 -- # (( file <= files )) 00:25:20.661 22:45:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.661 
22:45:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:20.661 22:45:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:20.661 22:45:21 -- nvmf/common.sh@544 -- # jq . 00:25:20.661 22:45:21 -- nvmf/common.sh@545 -- # IFS=, 00:25:20.661 22:45:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:20.661 "params": { 00:25:20.661 "name": "Nvme0", 00:25:20.661 "trtype": "tcp", 00:25:20.661 "traddr": "10.0.0.2", 00:25:20.661 "adrfam": "ipv4", 00:25:20.661 "trsvcid": "4420", 00:25:20.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:20.661 "hdgst": false, 00:25:20.661 "ddgst": false 00:25:20.661 }, 00:25:20.661 "method": "bdev_nvme_attach_controller" 00:25:20.661 }' 00:25:20.661 22:45:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:20.661 22:45:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:20.661 22:45:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.661 22:45:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.661 22:45:21 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:20.661 22:45:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:20.661 22:45:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:20.661 22:45:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:20.661 22:45:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:20.661 22:45:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.920 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:20.920 fio-3.35 00:25:20.920 Starting 1 thread 00:25:21.180 [2024-11-20 22:45:21.847162] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
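The block above is the harness assembling a bdev JSON configuration on the fly and handing it to fio through the SPDK bdev plugin (/dev/fd/62 carries the JSON, /dev/fd/61 the generated job file, and the plugin is injected via LD_PRELOAD). Stripped of the helper functions, an equivalent standalone invocation would look roughly like the sketch below. The attach-controller parameters and the job shape (randread, 4k, iodepth 4, ~10 s) are echoed in the log; the outer "subsystems"/"config" wrapper and the Nvme0n1 filename follow SPDK's usual conventions and are assumptions, since the harness does not echo them verbatim:

    # Sketch: what the generated /dev/fd/62 (bdev JSON) and /dev/fd/61 (fio job) boil down to.
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    cat > /tmp/dif_default.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=4k
    iodepth=4
    time_based=1
    runtime=10
    [filename0]
    filename=Nvme0n1
    EOF
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --spdk_json_conf=/tmp/nvme0.json /tmp/dif_default.fio

The RPC socket errors that follow are expected noise: the fio plugin tries to bring up its own RPC listener on /var/tmp/spdk.sock, which the already-running nvmf_tgt owns, and the test proceeds regardless.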
00:25:21.180 [2024-11-20 22:45:21.847242] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:33.386 00:25:33.386 filename0: (groupid=0, jobs=1): err= 0: pid=101983: Wed Nov 20 22:45:31 2024 00:25:33.386 read: IOPS=5456, BW=21.3MiB/s (22.3MB/s)(213MiB/10001msec) 00:25:33.386 slat (nsec): min=5760, max=75848, avg=6822.86, stdev=2258.93 00:25:33.386 clat (usec): min=363, max=42399, avg=712.93, stdev=3565.53 00:25:33.386 lat (usec): min=370, max=42407, avg=719.75, stdev=3565.61 00:25:33.386 clat percentiles (usec): 00:25:33.386 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 383], 00:25:33.386 | 30.00th=[ 388], 40.00th=[ 392], 50.00th=[ 396], 60.00th=[ 400], 00:25:33.386 | 70.00th=[ 404], 80.00th=[ 412], 90.00th=[ 424], 95.00th=[ 437], 00:25:33.386 | 99.00th=[ 502], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:25:33.386 | 99.99th=[41681] 00:25:33.386 bw ( KiB/s): min=13408, max=29152, per=100.00%, avg=21985.68, stdev=5130.59, samples=19 00:25:33.386 iops : min= 3352, max= 7288, avg=5496.42, stdev=1282.65, samples=19 00:25:33.386 lat (usec) : 500=98.97%, 750=0.22% 00:25:33.386 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 50=0.77% 00:25:33.386 cpu : usr=88.76%, sys=9.12%, ctx=45, majf=0, minf=8 00:25:33.386 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:33.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.386 issued rwts: total=54568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.386 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:33.386 00:25:33.386 Run status group 0 (all jobs): 00:25:33.386 READ: bw=21.3MiB/s (22.3MB/s), 21.3MiB/s-21.3MiB/s (22.3MB/s-22.3MB/s), io=213MiB (224MB), run=10001-10001msec 00:25:33.386 22:45:32 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:33.386 22:45:32 -- target/dif.sh@43 -- # local sub 00:25:33.386 22:45:32 -- target/dif.sh@45 -- # for sub in "$@" 00:25:33.386 22:45:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:33.386 22:45:32 -- target/dif.sh@36 -- # local sub_id=0 00:25:33.386 22:45:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 00:25:33.386 real 0m10.994s 00:25:33.386 user 0m9.523s 00:25:33.386 sys 0m1.173s 00:25:33.386 22:45:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 ************************************ 00:25:33.386 END TEST fio_dif_1_default 00:25:33.386 ************************************ 00:25:33.386 22:45:32 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:33.386 22:45:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:33.386 22:45:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 ************************************ 00:25:33.386 
START TEST fio_dif_1_multi_subsystems 00:25:33.386 ************************************ 00:25:33.386 22:45:32 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:33.386 22:45:32 -- target/dif.sh@92 -- # local files=1 00:25:33.386 22:45:32 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:33.386 22:45:32 -- target/dif.sh@28 -- # local sub 00:25:33.386 22:45:32 -- target/dif.sh@30 -- # for sub in "$@" 00:25:33.386 22:45:32 -- target/dif.sh@31 -- # create_subsystem 0 00:25:33.386 22:45:32 -- target/dif.sh@18 -- # local sub_id=0 00:25:33.386 22:45:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 bdev_null0 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 [2024-11-20 22:45:32.257228] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@30 -- # for sub in "$@" 00:25:33.386 22:45:32 -- target/dif.sh@31 -- # create_subsystem 1 00:25:33.386 22:45:32 -- target/dif.sh@18 -- # local sub_id=1 00:25:33.386 22:45:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 bdev_null1 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.386 22:45:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.386 
22:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:33.386 22:45:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.386 22:45:32 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:33.386 22:45:32 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:33.386 22:45:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:33.386 22:45:32 -- nvmf/common.sh@520 -- # config=() 00:25:33.386 22:45:32 -- nvmf/common.sh@520 -- # local subsystem config 00:25:33.386 22:45:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.386 22:45:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.386 22:45:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.386 { 00:25:33.386 "params": { 00:25:33.386 "name": "Nvme$subsystem", 00:25:33.386 "trtype": "$TEST_TRANSPORT", 00:25:33.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.386 "adrfam": "ipv4", 00:25:33.386 "trsvcid": "$NVMF_PORT", 00:25:33.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.386 "hdgst": ${hdgst:-false}, 00:25:33.386 "ddgst": ${ddgst:-false} 00:25:33.386 }, 00:25:33.386 "method": "bdev_nvme_attach_controller" 00:25:33.386 } 00:25:33.386 EOF 00:25:33.386 )") 00:25:33.386 22:45:32 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.386 22:45:32 -- target/dif.sh@82 -- # gen_fio_conf 00:25:33.386 22:45:32 -- target/dif.sh@54 -- # local file 00:25:33.386 22:45:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:33.386 22:45:32 -- target/dif.sh@56 -- # cat 00:25:33.386 22:45:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:33.386 22:45:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:33.386 22:45:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.386 22:45:32 -- common/autotest_common.sh@1330 -- # shift 00:25:33.386 22:45:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:33.386 22:45:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.386 22:45:32 -- nvmf/common.sh@542 -- # cat 00:25:33.386 22:45:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.386 22:45:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:33.386 22:45:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:33.387 22:45:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:33.387 22:45:32 -- target/dif.sh@72 -- # (( file <= files )) 00:25:33.387 22:45:32 -- target/dif.sh@73 -- # cat 00:25:33.387 22:45:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.387 22:45:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.387 { 00:25:33.387 "params": { 00:25:33.387 "name": "Nvme$subsystem", 00:25:33.387 "trtype": "$TEST_TRANSPORT", 00:25:33.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.387 "adrfam": "ipv4", 00:25:33.387 "trsvcid": "$NVMF_PORT", 00:25:33.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.387 "hdgst": ${hdgst:-false}, 00:25:33.387 "ddgst": ${ddgst:-false} 00:25:33.387 }, 00:25:33.387 "method": "bdev_nvme_attach_controller" 00:25:33.387 } 00:25:33.387 EOF 00:25:33.387 )") 00:25:33.387 22:45:32 -- target/dif.sh@72 -- # (( file++ )) 00:25:33.387 22:45:32 
-- target/dif.sh@72 -- # (( file <= files )) 00:25:33.387 22:45:32 -- nvmf/common.sh@542 -- # cat 00:25:33.387 22:45:32 -- nvmf/common.sh@544 -- # jq . 00:25:33.387 22:45:32 -- nvmf/common.sh@545 -- # IFS=, 00:25:33.387 22:45:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:33.387 "params": { 00:25:33.387 "name": "Nvme0", 00:25:33.387 "trtype": "tcp", 00:25:33.387 "traddr": "10.0.0.2", 00:25:33.387 "adrfam": "ipv4", 00:25:33.387 "trsvcid": "4420", 00:25:33.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:33.387 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:33.387 "hdgst": false, 00:25:33.387 "ddgst": false 00:25:33.387 }, 00:25:33.387 "method": "bdev_nvme_attach_controller" 00:25:33.387 },{ 00:25:33.387 "params": { 00:25:33.387 "name": "Nvme1", 00:25:33.387 "trtype": "tcp", 00:25:33.387 "traddr": "10.0.0.2", 00:25:33.387 "adrfam": "ipv4", 00:25:33.387 "trsvcid": "4420", 00:25:33.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:33.387 "hdgst": false, 00:25:33.387 "ddgst": false 00:25:33.387 }, 00:25:33.387 "method": "bdev_nvme_attach_controller" 00:25:33.387 }' 00:25:33.387 22:45:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:33.387 22:45:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:33.387 22:45:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.387 22:45:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.387 22:45:32 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:33.387 22:45:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:33.387 22:45:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:33.387 22:45:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:33.387 22:45:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:33.387 22:45:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.387 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:33.387 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:33.387 fio-3.35 00:25:33.387 Starting 2 threads 00:25:33.387 [2024-11-20 22:45:33.037450] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
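For reference, the rpc_cmd calls that built the two-subsystem layout above (two DIF type 1 null bdevs, each behind its own NQN and TCP listener) map one-to-one onto SPDK's scripts/rpc.py. The sketch below reconstructs them as plain CLI calls; the script path and the /var/tmp/spdk.sock socket match what this run uses, but rpc_cmd hides them, so treat those details as assumptions:

    # Sketch of the target-side setup for fio_dif_1_multi_subsystems, as explicit rpc.py calls.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    # TCP transport with DIF insert/strip, as enabled by target/dif.sh@50.
    rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    for i in 0 1; do
        # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1.
        rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

On the fio side, each subsystem then shows up as its own bdev and hence its own job (filename0 and filename1 in the results that follow).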
00:25:33.387 [2024-11-20 22:45:33.037523] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:43.392 00:25:43.392 filename0: (groupid=0, jobs=1): err= 0: pid=102150: Wed Nov 20 22:45:43 2024 00:25:43.392 read: IOPS=1359, BW=5437KiB/s (5568kB/s)(53.2MiB/10020msec) 00:25:43.392 slat (nsec): min=5786, max=55563, avg=7469.13, stdev=3136.74 00:25:43.392 clat (usec): min=371, max=41841, avg=2920.67, stdev=9706.46 00:25:43.392 lat (usec): min=377, max=41853, avg=2928.14, stdev=9706.52 00:25:43.392 clat percentiles (usec): 00:25:43.392 | 1.00th=[ 383], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 396], 00:25:43.392 | 30.00th=[ 404], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 424], 00:25:43.392 | 70.00th=[ 437], 80.00th=[ 457], 90.00th=[ 701], 95.00th=[40633], 00:25:43.392 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:25:43.392 | 99.99th=[41681] 00:25:43.392 bw ( KiB/s): min= 2912, max= 8832, per=54.38%, avg=5446.40, stdev=1445.50, samples=20 00:25:43.392 iops : min= 728, max= 2208, avg=1361.60, stdev=361.37, samples=20 00:25:43.392 lat (usec) : 500=83.22%, 750=10.31%, 1000=0.18% 00:25:43.392 lat (msec) : 2=0.18%, 50=6.11% 00:25:43.392 cpu : usr=94.39%, sys=4.84%, ctx=26, majf=0, minf=0 00:25:43.392 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.392 issued rwts: total=13620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.392 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:43.392 filename1: (groupid=0, jobs=1): err= 0: pid=102151: Wed Nov 20 22:45:43 2024 00:25:43.392 read: IOPS=1146, BW=4584KiB/s (4694kB/s)(44.9MiB/10034msec) 00:25:43.392 slat (nsec): min=5790, max=39881, avg=7451.64, stdev=2986.30 00:25:43.392 clat (usec): min=370, max=41669, avg=3467.86, stdev=10640.28 00:25:43.392 lat (usec): min=377, max=41681, avg=3475.31, stdev=10640.36 00:25:43.392 clat percentiles (usec): 00:25:43.392 | 1.00th=[ 379], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 396], 00:25:43.392 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:25:43.392 | 70.00th=[ 437], 80.00th=[ 482], 90.00th=[ 709], 95.00th=[40633], 00:25:43.392 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:25:43.392 | 99.99th=[41681] 00:25:43.392 bw ( KiB/s): min= 2752, max= 7840, per=45.92%, avg=4598.40, stdev=1304.73, samples=20 00:25:43.392 iops : min= 688, max= 1960, avg=1149.60, stdev=326.18, samples=20 00:25:43.392 lat (usec) : 500=80.61%, 750=11.44%, 1000=0.27% 00:25:43.392 lat (msec) : 2=0.23%, 50=7.44% 00:25:43.392 cpu : usr=95.07%, sys=4.23%, ctx=11, majf=0, minf=9 00:25:43.392 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.392 issued rwts: total=11500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.392 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:43.392 00:25:43.392 Run status group 0 (all jobs): 00:25:43.392 READ: bw=9.78MiB/s (10.3MB/s), 4584KiB/s-5437KiB/s (4694kB/s-5568kB/s), io=98.1MiB (103MB), run=10020-10034msec 00:25:43.392 22:45:43 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:43.392 22:45:43 -- target/dif.sh@43 -- # local sub 00:25:43.392 22:45:43 -- target/dif.sh@45 -- # for 
sub in "$@" 00:25:43.392 22:45:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:43.392 22:45:43 -- target/dif.sh@36 -- # local sub_id=0 00:25:43.392 22:45:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:43.392 22:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.392 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.392 22:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.392 22:45:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:43.392 22:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.392 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.392 22:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.392 22:45:43 -- target/dif.sh@45 -- # for sub in "$@" 00:25:43.392 22:45:43 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:43.392 22:45:43 -- target/dif.sh@36 -- # local sub_id=1 00:25:43.392 22:45:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.392 22:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.392 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.392 22:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.392 22:45:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:43.392 22:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.392 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.392 22:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.392 00:25:43.392 real 0m11.196s 00:25:43.392 user 0m19.788s 00:25:43.392 sys 0m1.195s 00:25:43.392 22:45:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:43.392 ************************************ 00:25:43.392 END TEST fio_dif_1_multi_subsystems 00:25:43.392 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.392 ************************************ 00:25:43.392 22:45:43 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:43.392 22:45:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:43.392 22:45:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:43.392 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.392 ************************************ 00:25:43.392 START TEST fio_dif_rand_params 00:25:43.392 ************************************ 00:25:43.392 22:45:43 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:25:43.392 22:45:43 -- target/dif.sh@100 -- # local NULL_DIF 00:25:43.392 22:45:43 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:43.392 22:45:43 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:43.392 22:45:43 -- target/dif.sh@103 -- # bs=128k 00:25:43.392 22:45:43 -- target/dif.sh@103 -- # numjobs=3 00:25:43.392 22:45:43 -- target/dif.sh@103 -- # iodepth=3 00:25:43.392 22:45:43 -- target/dif.sh@103 -- # runtime=5 00:25:43.393 22:45:43 -- target/dif.sh@105 -- # create_subsystems 0 00:25:43.393 22:45:43 -- target/dif.sh@28 -- # local sub 00:25:43.393 22:45:43 -- target/dif.sh@30 -- # for sub in "$@" 00:25:43.393 22:45:43 -- target/dif.sh@31 -- # create_subsystem 0 00:25:43.393 22:45:43 -- target/dif.sh@18 -- # local sub_id=0 00:25:43.393 22:45:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:43.393 22:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.393 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.393 bdev_null0 00:25:43.393 22:45:43 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.393 22:45:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:43.393 22:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.393 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.393 22:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.393 22:45:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:43.393 22:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.393 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.393 22:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.393 22:45:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:43.393 22:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.393 22:45:43 -- common/autotest_common.sh@10 -- # set +x 00:25:43.393 [2024-11-20 22:45:43.511624] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.393 22:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.393 22:45:43 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:43.393 22:45:43 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:43.393 22:45:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:43.393 22:45:43 -- nvmf/common.sh@520 -- # config=() 00:25:43.393 22:45:43 -- nvmf/common.sh@520 -- # local subsystem config 00:25:43.393 22:45:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:43.393 22:45:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:43.393 { 00:25:43.393 "params": { 00:25:43.393 "name": "Nvme$subsystem", 00:25:43.393 "trtype": "$TEST_TRANSPORT", 00:25:43.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.393 "adrfam": "ipv4", 00:25:43.393 "trsvcid": "$NVMF_PORT", 00:25:43.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.393 "hdgst": ${hdgst:-false}, 00:25:43.393 "ddgst": ${ddgst:-false} 00:25:43.393 }, 00:25:43.393 "method": "bdev_nvme_attach_controller" 00:25:43.393 } 00:25:43.393 EOF 00:25:43.393 )") 00:25:43.393 22:45:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.393 22:45:43 -- target/dif.sh@82 -- # gen_fio_conf 00:25:43.393 22:45:43 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.393 22:45:43 -- target/dif.sh@54 -- # local file 00:25:43.393 22:45:43 -- target/dif.sh@56 -- # cat 00:25:43.393 22:45:43 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:43.393 22:45:43 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:43.393 22:45:43 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:43.393 22:45:43 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.393 22:45:43 -- nvmf/common.sh@542 -- # cat 00:25:43.393 22:45:43 -- common/autotest_common.sh@1330 -- # shift 00:25:43.393 22:45:43 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:43.393 22:45:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.393 22:45:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.393 22:45:43 
-- common/autotest_common.sh@1334 -- # grep libasan 00:25:43.393 22:45:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:43.393 22:45:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:43.393 22:45:43 -- target/dif.sh@72 -- # (( file <= files )) 00:25:43.393 22:45:43 -- nvmf/common.sh@544 -- # jq . 00:25:43.393 22:45:43 -- nvmf/common.sh@545 -- # IFS=, 00:25:43.393 22:45:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:43.393 "params": { 00:25:43.393 "name": "Nvme0", 00:25:43.393 "trtype": "tcp", 00:25:43.393 "traddr": "10.0.0.2", 00:25:43.393 "adrfam": "ipv4", 00:25:43.393 "trsvcid": "4420", 00:25:43.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:43.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:43.393 "hdgst": false, 00:25:43.393 "ddgst": false 00:25:43.393 }, 00:25:43.393 "method": "bdev_nvme_attach_controller" 00:25:43.393 }' 00:25:43.393 22:45:43 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:43.393 22:45:43 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:43.393 22:45:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.393 22:45:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.393 22:45:43 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:43.393 22:45:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:43.393 22:45:43 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:43.393 22:45:43 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:43.393 22:45:43 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:43.393 22:45:43 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.393 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:43.393 ... 00:25:43.393 fio-3.35 00:25:43.393 Starting 3 threads 00:25:43.652 [2024-11-20 22:45:44.123415] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
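The rand_params case swaps the defaults for a DIF type 3 null bdev and a heavier job shape: 128k blocks, 3 jobs, queue depth 3, 5-second runtime, per target/dif.sh@103 above. A rough standalone equivalent of the fio side, again through the SPDK bdev plugin and reusing the attach-controller JSON from the earlier sketch, would be the following (the Nvme0n1 filename and the time_based flag are inferred from SPDK's bdev naming and the ~5004 ms run times, so they are assumptions):

    # Job shape used by fio_dif_rand_params; values from target/dif.sh@103.
    cat > /tmp/dif_rand.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5
    [filename0]
    filename=Nvme0n1
    EOF
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --spdk_json_conf=/tmp/nvme0.json /tmp/dif_rand.fio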
00:25:43.652 [2024-11-20 22:45:44.123492] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:48.921 00:25:48.921 filename0: (groupid=0, jobs=1): err= 0: pid=102307: Wed Nov 20 22:45:49 2024 00:25:48.921 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(162MiB/5006msec) 00:25:48.921 slat (nsec): min=5951, max=44226, avg=13073.55, stdev=5882.95 00:25:48.921 clat (usec): min=3213, max=50685, avg=11538.66, stdev=11081.55 00:25:48.921 lat (usec): min=3223, max=50703, avg=11551.73, stdev=11081.60 00:25:48.921 clat percentiles (usec): 00:25:48.921 | 1.00th=[ 5342], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6915], 00:25:48.921 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:25:48.921 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[48497], 00:25:48.921 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:25:48.921 | 99.99th=[50594] 00:25:48.921 bw ( KiB/s): min=24064, max=43776, per=30.21%, avg=33203.20, stdev=6912.58, samples=10 00:25:48.921 iops : min= 188, max= 342, avg=259.40, stdev=54.00, samples=10 00:25:48.921 lat (msec) : 4=0.69%, 10=89.30%, 20=1.92%, 50=7.47%, 100=0.62% 00:25:48.921 cpu : usr=93.83%, sys=4.70%, ctx=5, majf=0, minf=0 00:25:48.921 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.921 issued rwts: total=1299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.921 filename0: (groupid=0, jobs=1): err= 0: pid=102308: Wed Nov 20 22:45:49 2024 00:25:48.921 read: IOPS=249, BW=31.1MiB/s (32.7MB/s)(156MiB/5005msec) 00:25:48.921 slat (nsec): min=5861, max=58783, avg=12704.48, stdev=6448.90 00:25:48.921 clat (usec): min=4953, max=53771, avg=12019.47, stdev=10679.63 00:25:48.921 lat (usec): min=4972, max=53796, avg=12032.17, stdev=10679.85 00:25:48.921 clat percentiles (usec): 00:25:48.922 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6783], 00:25:48.922 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:25:48.922 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11338], 95.00th=[47449], 00:25:48.922 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:25:48.922 | 99.99th=[53740] 00:25:48.922 bw ( KiB/s): min=23808, max=40704, per=29.40%, avg=32312.89, stdev=5211.62, samples=9 00:25:48.922 iops : min= 186, max= 318, avg=252.44, stdev=40.72, samples=9 00:25:48.922 lat (msec) : 10=54.13%, 20=38.41%, 50=4.89%, 100=2.57% 00:25:48.922 cpu : usr=94.28%, sys=4.28%, ctx=7, majf=0, minf=9 00:25:48.922 IO depths : 1=4.7%, 2=95.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.922 issued rwts: total=1247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.922 filename0: (groupid=0, jobs=1): err= 0: pid=102309: Wed Nov 20 22:45:49 2024 00:25:48.922 read: IOPS=350, BW=43.8MiB/s (45.9MB/s)(219MiB/5004msec) 00:25:48.922 slat (nsec): min=6263, max=58930, avg=14290.93, stdev=5997.29 00:25:48.922 clat (usec): min=3120, max=47630, avg=8545.12, stdev=3855.05 00:25:48.922 lat (usec): min=3129, max=47640, avg=8559.41, stdev=3856.74 00:25:48.922 clat percentiles 
(usec): 00:25:48.922 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3687], 00:25:48.922 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 9372], 00:25:48.922 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13042], 95.00th=[13304], 00:25:48.922 | 99.00th=[13960], 99.50th=[14091], 99.90th=[44827], 99.95th=[47449], 00:25:48.922 | 99.99th=[47449] 00:25:48.922 bw ( KiB/s): min=30781, max=56832, per=40.58%, avg=44607.67, stdev=9820.93, samples=9 00:25:48.922 iops : min= 240, max= 444, avg=348.44, stdev=76.81, samples=9 00:25:48.922 lat (msec) : 4=22.65%, 10=39.02%, 20=38.16%, 50=0.17% 00:25:48.922 cpu : usr=93.94%, sys=4.40%, ctx=13, majf=0, minf=11 00:25:48.922 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.922 issued rwts: total=1753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.922 00:25:48.922 Run status group 0 (all jobs): 00:25:48.922 READ: bw=107MiB/s (113MB/s), 31.1MiB/s-43.8MiB/s (32.7MB/s-45.9MB/s), io=537MiB (563MB), run=5004-5006msec 00:25:48.922 22:45:49 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:48.922 22:45:49 -- target/dif.sh@43 -- # local sub 00:25:48.922 22:45:49 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.922 22:45:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:48.922 22:45:49 -- target/dif.sh@36 -- # local sub_id=0 00:25:48.922 22:45:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:48.922 22:45:49 -- target/dif.sh@109 -- # bs=4k 00:25:48.922 22:45:49 -- target/dif.sh@109 -- # numjobs=8 00:25:48.922 22:45:49 -- target/dif.sh@109 -- # iodepth=16 00:25:48.922 22:45:49 -- target/dif.sh@109 -- # runtime= 00:25:48.922 22:45:49 -- target/dif.sh@109 -- # files=2 00:25:48.922 22:45:49 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:48.922 22:45:49 -- target/dif.sh@28 -- # local sub 00:25:48.922 22:45:49 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.922 22:45:49 -- target/dif.sh@31 -- # create_subsystem 0 00:25:48.922 22:45:49 -- target/dif.sh@18 -- # local sub_id=0 00:25:48.922 22:45:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 bdev_null0 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 [2024-11-20 22:45:49.492940] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.922 22:45:49 -- target/dif.sh@31 -- # create_subsystem 1 00:25:48.922 22:45:49 -- target/dif.sh@18 -- # local sub_id=1 00:25:48.922 22:45:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 bdev_null1 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.922 22:45:49 -- target/dif.sh@31 -- # create_subsystem 2 00:25:48.922 22:45:49 -- target/dif.sh@18 -- # local sub_id=2 00:25:48.922 22:45:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 bdev_null2 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:48.922 22:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.922 22:45:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.922 22:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.922 22:45:49 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:48.923 22:45:49 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:48.923 22:45:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:48.923 22:45:49 -- nvmf/common.sh@520 -- # config=() 00:25:48.923 22:45:49 -- nvmf/common.sh@520 -- # local subsystem config 00:25:48.923 22:45:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.923 22:45:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:48.923 22:45:49 -- target/dif.sh@82 -- # gen_fio_conf 00:25:48.923 22:45:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:48.923 { 00:25:48.923 "params": { 00:25:48.923 "name": "Nvme$subsystem", 00:25:48.923 "trtype": "$TEST_TRANSPORT", 00:25:48.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.923 "adrfam": "ipv4", 00:25:48.923 "trsvcid": "$NVMF_PORT", 00:25:48.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.923 "hdgst": ${hdgst:-false}, 00:25:48.923 "ddgst": ${ddgst:-false} 00:25:48.923 }, 00:25:48.923 "method": "bdev_nvme_attach_controller" 00:25:48.923 } 00:25:48.923 EOF 00:25:48.923 )") 00:25:48.923 22:45:49 -- target/dif.sh@54 -- # local file 00:25:48.923 22:45:49 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.923 22:45:49 -- target/dif.sh@56 -- # cat 00:25:48.923 22:45:49 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:48.923 22:45:49 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.923 22:45:49 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:48.923 22:45:49 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.923 22:45:49 -- common/autotest_common.sh@1330 -- # shift 00:25:48.923 22:45:49 -- nvmf/common.sh@542 -- # cat 00:25:48.923 22:45:49 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:48.923 22:45:49 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.923 22:45:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:48.923 22:45:49 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.923 22:45:49 -- target/dif.sh@73 -- # cat 00:25:48.923 22:45:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.923 22:45:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:48.923 22:45:49 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:48.923 22:45:49 -- target/dif.sh@72 -- # (( file++ )) 00:25:48.923 22:45:49 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.923 22:45:49 -- target/dif.sh@73 -- # cat 00:25:48.923 22:45:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:48.923 22:45:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:48.923 { 00:25:48.923 "params": { 00:25:48.923 "name": "Nvme$subsystem", 00:25:48.923 "trtype": 
"$TEST_TRANSPORT", 00:25:48.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.923 "adrfam": "ipv4", 00:25:48.923 "trsvcid": "$NVMF_PORT", 00:25:48.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.923 "hdgst": ${hdgst:-false}, 00:25:48.923 "ddgst": ${ddgst:-false} 00:25:48.923 }, 00:25:48.923 "method": "bdev_nvme_attach_controller" 00:25:48.923 } 00:25:48.923 EOF 00:25:48.923 )") 00:25:48.923 22:45:49 -- nvmf/common.sh@542 -- # cat 00:25:48.923 22:45:49 -- target/dif.sh@72 -- # (( file++ )) 00:25:48.923 22:45:49 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.923 22:45:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:48.923 22:45:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:48.923 { 00:25:48.923 "params": { 00:25:48.923 "name": "Nvme$subsystem", 00:25:48.923 "trtype": "$TEST_TRANSPORT", 00:25:48.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.923 "adrfam": "ipv4", 00:25:48.923 "trsvcid": "$NVMF_PORT", 00:25:48.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.923 "hdgst": ${hdgst:-false}, 00:25:48.923 "ddgst": ${ddgst:-false} 00:25:48.923 }, 00:25:48.923 "method": "bdev_nvme_attach_controller" 00:25:48.923 } 00:25:48.923 EOF 00:25:48.923 )") 00:25:48.923 22:45:49 -- nvmf/common.sh@542 -- # cat 00:25:48.923 22:45:49 -- nvmf/common.sh@544 -- # jq . 00:25:48.923 22:45:49 -- nvmf/common.sh@545 -- # IFS=, 00:25:48.923 22:45:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:48.923 "params": { 00:25:48.923 "name": "Nvme0", 00:25:48.923 "trtype": "tcp", 00:25:48.923 "traddr": "10.0.0.2", 00:25:48.923 "adrfam": "ipv4", 00:25:48.923 "trsvcid": "4420", 00:25:48.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:48.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:48.923 "hdgst": false, 00:25:48.923 "ddgst": false 00:25:48.923 }, 00:25:48.923 "method": "bdev_nvme_attach_controller" 00:25:48.923 },{ 00:25:48.923 "params": { 00:25:48.923 "name": "Nvme1", 00:25:48.923 "trtype": "tcp", 00:25:48.923 "traddr": "10.0.0.2", 00:25:48.923 "adrfam": "ipv4", 00:25:48.923 "trsvcid": "4420", 00:25:48.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.923 "hdgst": false, 00:25:48.923 "ddgst": false 00:25:48.923 }, 00:25:48.923 "method": "bdev_nvme_attach_controller" 00:25:48.923 },{ 00:25:48.923 "params": { 00:25:48.923 "name": "Nvme2", 00:25:48.923 "trtype": "tcp", 00:25:48.923 "traddr": "10.0.0.2", 00:25:48.923 "adrfam": "ipv4", 00:25:48.923 "trsvcid": "4420", 00:25:48.923 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:48.923 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:48.923 "hdgst": false, 00:25:48.923 "ddgst": false 00:25:48.923 }, 00:25:48.923 "method": "bdev_nvme_attach_controller" 00:25:48.923 }' 00:25:48.923 22:45:49 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:48.923 22:45:49 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:48.923 22:45:49 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.923 22:45:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.923 22:45:49 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:48.923 22:45:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:48.923 22:45:49 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:48.923 22:45:49 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:48.923 22:45:49 -- 
common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:48.923 22:45:49 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.182 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.182 ... 00:25:49.182 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.182 ... 00:25:49.182 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.182 ... 00:25:49.182 fio-3.35 00:25:49.182 Starting 24 threads 00:25:49.751 [2024-11-20 22:45:50.384817] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:49.751 [2024-11-20 22:45:50.384872] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:01.960 00:26:01.960 filename0: (groupid=0, jobs=1): err= 0: pid=102404: Wed Nov 20 22:46:00 2024 00:26:01.960 read: IOPS=247, BW=992KiB/s (1015kB/s)(9940KiB/10024msec) 00:26:01.960 slat (usec): min=3, max=8043, avg=20.80, stdev=254.46 00:26:01.960 clat (msec): min=24, max=138, avg=64.43, stdev=18.02 00:26:01.960 lat (msec): min=24, max=138, avg=64.45, stdev=18.03 00:26:01.960 clat percentiles (msec): 00:26:01.960 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 00:26:01.960 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 68], 00:26:01.960 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 97], 00:26:01.960 | 99.00th=[ 118], 99.50th=[ 118], 99.90th=[ 140], 99.95th=[ 140], 00:26:01.960 | 99.99th=[ 140] 00:26:01.960 bw ( KiB/s): min= 640, max= 1248, per=4.09%, avg=987.40, stdev=176.55, samples=20 00:26:01.960 iops : min= 160, max= 312, avg=246.80, stdev=44.14, samples=20 00:26:01.960 lat (msec) : 50=21.89%, 100=74.69%, 250=3.42% 00:26:01.960 cpu : usr=35.51%, sys=0.59%, ctx=1269, majf=0, minf=9 00:26:01.960 IO depths : 1=1.2%, 2=2.7%, 4=9.9%, 8=73.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:01.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.960 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.960 issued rwts: total=2485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.960 filename0: (groupid=0, jobs=1): err= 0: pid=102405: Wed Nov 20 22:46:00 2024 00:26:01.960 read: IOPS=298, BW=1193KiB/s (1221kB/s)(11.7MiB/10008msec) 00:26:01.960 slat (usec): min=4, max=8026, avg=14.59, stdev=146.90 00:26:01.960 clat (msec): min=2, max=119, avg=53.57, stdev=19.17 00:26:01.960 lat (msec): min=2, max=119, avg=53.59, stdev=19.17 00:26:01.960 clat percentiles (msec): 00:26:01.960 | 1.00th=[ 4], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:26:01.961 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 58], 00:26:01.961 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 81], 95.00th=[ 87], 00:26:01.961 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:26:01.961 | 99.99th=[ 121] 00:26:01.961 bw ( KiB/s): min= 784, max= 1747, per=4.91%, avg=1186.95, stdev=245.03, samples=20 00:26:01.961 iops : min= 196, max= 436, avg=296.55, stdev=61.21, samples=20 00:26:01.961 lat (msec) : 4=1.07%, 10=2.14%, 50=44.50%, 100=49.46%, 250=2.82% 00:26:01.961 cpu : usr=41.04%, sys=0.68%, ctx=1075, majf=0, minf=9 00:26:01.961 IO depths : 1=1.0%, 2=2.0%, 4=8.5%, 
8=75.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:01.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 issued rwts: total=2984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.961 filename0: (groupid=0, jobs=1): err= 0: pid=102406: Wed Nov 20 22:46:00 2024 00:26:01.961 read: IOPS=229, BW=917KiB/s (939kB/s)(9188KiB/10019msec) 00:26:01.961 slat (usec): min=4, max=8033, avg=26.28, stdev=291.15 00:26:01.961 clat (msec): min=28, max=152, avg=69.58, stdev=18.43 00:26:01.961 lat (msec): min=28, max=152, avg=69.60, stdev=18.45 00:26:01.961 clat percentiles (msec): 00:26:01.961 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 51], 20.00th=[ 58], 00:26:01.961 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 72], 00:26:01.961 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 106], 00:26:01.961 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 146], 00:26:01.961 | 99.99th=[ 153] 00:26:01.961 bw ( KiB/s): min= 640, max= 1304, per=3.78%, avg=912.40, stdev=158.34, samples=20 00:26:01.961 iops : min= 160, max= 326, avg=228.10, stdev=39.58, samples=20 00:26:01.961 lat (msec) : 50=9.62%, 100=84.28%, 250=6.09% 00:26:01.961 cpu : usr=38.53%, sys=0.55%, ctx=1167, majf=0, minf=9 00:26:01.961 IO depths : 1=1.9%, 2=4.7%, 4=14.6%, 8=67.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:01.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.961 filename0: (groupid=0, jobs=1): err= 0: pid=102407: Wed Nov 20 22:46:00 2024 00:26:01.961 read: IOPS=231, BW=927KiB/s (949kB/s)(9284KiB/10014msec) 00:26:01.961 slat (usec): min=4, max=1853, avg=13.30, stdev=38.95 00:26:01.961 clat (msec): min=26, max=161, avg=68.92, stdev=20.08 00:26:01.961 lat (msec): min=26, max=161, avg=68.94, stdev=20.08 00:26:01.961 clat percentiles (msec): 00:26:01.961 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 54], 00:26:01.961 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:26:01.961 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 108], 00:26:01.961 | 99.00th=[ 132], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 159], 00:26:01.961 | 99.99th=[ 161] 00:26:01.961 bw ( KiB/s): min= 592, max= 1120, per=3.79%, avg=914.11, stdev=160.40, samples=19 00:26:01.961 iops : min= 148, max= 280, avg=228.53, stdev=40.10, samples=19 00:26:01.961 lat (msec) : 50=18.48%, 100=74.19%, 250=7.32% 00:26:01.961 cpu : usr=32.65%, sys=0.43%, ctx=881, majf=0, minf=9 00:26:01.961 IO depths : 1=1.1%, 2=2.5%, 4=10.3%, 8=73.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:01.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 complete : 0=0.0%, 4=90.4%, 8=5.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 issued rwts: total=2321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.961 filename0: (groupid=0, jobs=1): err= 0: pid=102408: Wed Nov 20 22:46:00 2024 00:26:01.961 read: IOPS=225, BW=900KiB/s (922kB/s)(9008KiB/10004msec) 00:26:01.961 slat (usec): min=4, max=8049, avg=18.02, stdev=189.50 00:26:01.961 clat (msec): min=7, max=142, avg=70.95, stdev=20.76 00:26:01.961 lat (msec): min=7, max=142, avg=70.97, 
stdev=20.76 00:26:01.961 clat percentiles (msec): 00:26:01.961 | 1.00th=[ 25], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 56], 00:26:01.961 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 73], 00:26:01.961 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 109], 00:26:01.961 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:26:01.961 | 99.99th=[ 144] 00:26:01.961 bw ( KiB/s): min= 513, max= 1190, per=3.67%, avg=887.11, stdev=169.42, samples=19 00:26:01.961 iops : min= 128, max= 297, avg=221.74, stdev=42.34, samples=19 00:26:01.961 lat (msec) : 10=0.27%, 20=0.44%, 50=15.01%, 100=76.78%, 250=7.50% 00:26:01.961 cpu : usr=36.10%, sys=0.51%, ctx=961, majf=0, minf=9 00:26:01.961 IO depths : 1=2.5%, 2=5.8%, 4=15.6%, 8=65.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:01.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 complete : 0=0.0%, 4=91.6%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.961 filename0: (groupid=0, jobs=1): err= 0: pid=102409: Wed Nov 20 22:46:00 2024 00:26:01.961 read: IOPS=232, BW=930KiB/s (952kB/s)(9300KiB/10004msec) 00:26:01.961 slat (usec): min=4, max=7095, avg=23.02, stdev=252.51 00:26:01.961 clat (msec): min=3, max=132, avg=68.71, stdev=20.32 00:26:01.961 lat (msec): min=3, max=132, avg=68.73, stdev=20.32 00:26:01.961 clat percentiles (msec): 00:26:01.961 | 1.00th=[ 23], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 55], 00:26:01.961 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 70], 00:26:01.961 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 109], 00:26:01.961 | 99.00th=[ 128], 99.50th=[ 133], 99.90th=[ 133], 99.95th=[ 133], 00:26:01.961 | 99.99th=[ 133] 00:26:01.961 bw ( KiB/s): min= 600, max= 1152, per=3.77%, avg=910.47, stdev=154.68, samples=19 00:26:01.961 iops : min= 150, max= 288, avg=227.58, stdev=38.64, samples=19 00:26:01.961 lat (msec) : 4=0.26%, 10=0.26%, 20=0.43%, 50=12.90%, 100=78.02% 00:26:01.961 lat (msec) : 250=8.13% 00:26:01.961 cpu : usr=35.20%, sys=0.67%, ctx=1232, majf=0, minf=9 00:26:01.961 IO depths : 1=1.7%, 2=4.4%, 4=13.9%, 8=68.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:01.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 complete : 0=0.0%, 4=91.3%, 8=4.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.961 filename0: (groupid=0, jobs=1): err= 0: pid=102410: Wed Nov 20 22:46:00 2024 00:26:01.961 read: IOPS=230, BW=923KiB/s (945kB/s)(9232KiB/10002msec) 00:26:01.961 slat (usec): min=3, max=8023, avg=18.70, stdev=186.63 00:26:01.961 clat (msec): min=3, max=156, avg=69.21, stdev=20.49 00:26:01.961 lat (msec): min=3, max=156, avg=69.23, stdev=20.49 00:26:01.961 clat percentiles (msec): 00:26:01.961 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 55], 00:26:01.961 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:26:01.961 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 109], 00:26:01.961 | 99.00th=[ 136], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 157], 00:26:01.961 | 99.99th=[ 157] 00:26:01.961 bw ( KiB/s): min= 569, max= 1120, per=3.76%, avg=907.16, stdev=158.79, samples=19 00:26:01.961 iops : min= 142, max= 280, avg=226.74, stdev=39.68, samples=19 00:26:01.961 lat (msec) : 4=0.26%, 10=0.43%, 50=14.73%, 100=77.77%, 
250=6.80% 00:26:01.961 cpu : usr=36.71%, sys=0.70%, ctx=1151, majf=0, minf=9 00:26:01.961 IO depths : 1=1.7%, 2=4.0%, 4=12.4%, 8=70.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:01.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.961 filename0: (groupid=0, jobs=1): err= 0: pid=102411: Wed Nov 20 22:46:00 2024 00:26:01.961 read: IOPS=256, BW=1027KiB/s (1052kB/s)(10.1MiB/10031msec) 00:26:01.961 slat (usec): min=4, max=8031, avg=21.73, stdev=249.57 00:26:01.961 clat (msec): min=23, max=127, avg=62.09, stdev=20.81 00:26:01.961 lat (msec): min=23, max=127, avg=62.11, stdev=20.82 00:26:01.961 clat percentiles (msec): 00:26:01.961 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 43], 00:26:01.961 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 64], 00:26:01.961 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 104], 00:26:01.961 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 128], 99.95th=[ 128], 00:26:01.961 | 99.99th=[ 128] 00:26:01.961 bw ( KiB/s): min= 640, max= 1352, per=4.24%, avg=1023.90, stdev=215.56, samples=20 00:26:01.961 iops : min= 160, max= 338, avg=255.95, stdev=53.92, samples=20 00:26:01.961 lat (msec) : 50=33.50%, 100=60.71%, 250=5.78% 00:26:01.961 cpu : usr=42.82%, sys=0.56%, ctx=1245, majf=0, minf=9 00:26:01.961 IO depths : 1=1.7%, 2=3.8%, 4=11.6%, 8=71.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:01.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 issued rwts: total=2576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.961 filename1: (groupid=0, jobs=1): err= 0: pid=102412: Wed Nov 20 22:46:00 2024 00:26:01.961 read: IOPS=275, BW=1103KiB/s (1130kB/s)(10.8MiB/10030msec) 00:26:01.961 slat (usec): min=3, max=8030, avg=29.59, stdev=323.10 00:26:01.961 clat (msec): min=23, max=120, avg=57.72, stdev=16.50 00:26:01.961 lat (msec): min=23, max=120, avg=57.75, stdev=16.50 00:26:01.961 clat percentiles (msec): 00:26:01.961 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 42], 00:26:01.961 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 61], 00:26:01.961 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 88], 00:26:01.961 | 99.00th=[ 104], 99.50th=[ 105], 99.90th=[ 122], 99.95th=[ 122], 00:26:01.961 | 99.99th=[ 122] 00:26:01.961 bw ( KiB/s): min= 816, max= 1328, per=4.56%, avg=1100.40, stdev=147.68, samples=20 00:26:01.961 iops : min= 204, max= 332, avg=275.10, stdev=36.92, samples=20 00:26:01.961 lat (msec) : 50=36.21%, 100=61.80%, 250=1.99% 00:26:01.961 cpu : usr=43.63%, sys=0.61%, ctx=1164, majf=0, minf=9 00:26:01.961 IO depths : 1=1.6%, 2=3.4%, 4=10.7%, 8=72.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:01.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.961 issued rwts: total=2767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.961 filename1: (groupid=0, jobs=1): err= 0: pid=102413: Wed Nov 20 22:46:00 2024 00:26:01.961 read: IOPS=226, BW=906KiB/s (927kB/s)(9060KiB/10005msec) 00:26:01.962 slat (usec): min=6, max=4007, 
avg=14.57, stdev=84.30 00:26:01.962 clat (msec): min=8, max=138, avg=70.57, stdev=18.65 00:26:01.962 lat (msec): min=8, max=138, avg=70.58, stdev=18.65 00:26:01.962 clat percentiles (msec): 00:26:01.962 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 58], 00:26:01.962 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 74], 00:26:01.962 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 105], 00:26:01.962 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 140], 99.95th=[ 140], 00:26:01.962 | 99.99th=[ 140] 00:26:01.962 bw ( KiB/s): min= 640, max= 1063, per=3.72%, avg=897.26, stdev=124.15, samples=19 00:26:01.962 iops : min= 160, max= 265, avg=224.26, stdev=31.01, samples=19 00:26:01.962 lat (msec) : 10=0.71%, 50=10.02%, 100=82.43%, 250=6.84% 00:26:01.962 cpu : usr=32.60%, sys=0.56%, ctx=885, majf=0, minf=9 00:26:01.962 IO depths : 1=2.6%, 2=5.9%, 4=16.4%, 8=64.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:01.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.962 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.962 filename1: (groupid=0, jobs=1): err= 0: pid=102414: Wed Nov 20 22:46:00 2024 00:26:01.962 read: IOPS=240, BW=960KiB/s (983kB/s)(9608KiB/10005msec) 00:26:01.962 slat (usec): min=6, max=8022, avg=22.74, stdev=231.26 00:26:01.962 clat (msec): min=5, max=144, avg=66.45, stdev=18.85 00:26:01.962 lat (msec): min=5, max=144, avg=66.47, stdev=18.85 00:26:01.962 clat percentiles (msec): 00:26:01.962 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 54], 00:26:01.962 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:26:01.962 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 102], 00:26:01.962 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 144], 00:26:01.962 | 99.99th=[ 144] 00:26:01.962 bw ( KiB/s): min= 640, max= 1200, per=3.91%, avg=943.63, stdev=157.28, samples=19 00:26:01.962 iops : min= 160, max= 300, avg=235.89, stdev=39.35, samples=19 00:26:01.962 lat (msec) : 10=0.67%, 50=14.78%, 100=79.06%, 250=5.50% 00:26:01.962 cpu : usr=44.89%, sys=0.58%, ctx=1256, majf=0, minf=9 00:26:01.962 IO depths : 1=2.7%, 2=6.2%, 4=15.6%, 8=65.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:01.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.962 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.962 filename1: (groupid=0, jobs=1): err= 0: pid=102415: Wed Nov 20 22:46:00 2024 00:26:01.962 read: IOPS=282, BW=1132KiB/s (1159kB/s)(11.1MiB/10059msec) 00:26:01.962 slat (usec): min=4, max=8028, avg=18.99, stdev=201.09 00:26:01.962 clat (msec): min=2, max=140, avg=56.27, stdev=20.18 00:26:01.962 lat (msec): min=2, max=140, avg=56.28, stdev=20.18 00:26:01.962 clat percentiles (msec): 00:26:01.962 | 1.00th=[ 4], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 41], 00:26:01.962 | 30.00th=[ 46], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61], 00:26:01.962 | 70.00th=[ 65], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 91], 00:26:01.962 | 99.00th=[ 112], 99.50th=[ 123], 99.90th=[ 140], 99.95th=[ 140], 00:26:01.962 | 99.99th=[ 140] 00:26:01.962 bw ( KiB/s): min= 811, max= 1757, per=4.69%, avg=1131.15, stdev=211.83, samples=20 00:26:01.962 iops : min= 202, max= 439, avg=282.65, 
stdev=52.95, samples=20 00:26:01.962 lat (msec) : 4=1.44%, 10=2.49%, 50=33.80%, 100=60.44%, 250=1.83% 00:26:01.962 cpu : usr=41.09%, sys=0.54%, ctx=1311, majf=0, minf=0 00:26:01.962 IO depths : 1=1.3%, 2=2.7%, 4=9.7%, 8=74.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:01.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 issued rwts: total=2846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.962 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.962 filename1: (groupid=0, jobs=1): err= 0: pid=102416: Wed Nov 20 22:46:00 2024 00:26:01.962 read: IOPS=229, BW=918KiB/s (940kB/s)(9184KiB/10004msec) 00:26:01.962 slat (usec): min=3, max=8017, avg=15.60, stdev=167.24 00:26:01.962 clat (msec): min=6, max=153, avg=69.62, stdev=22.98 00:26:01.962 lat (msec): min=6, max=153, avg=69.63, stdev=22.97 00:26:01.962 clat percentiles (msec): 00:26:01.962 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 50], 00:26:01.962 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:26:01.962 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 112], 00:26:01.962 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 155], 00:26:01.962 | 99.99th=[ 155] 00:26:01.962 bw ( KiB/s): min= 512, max= 1072, per=3.75%, avg=905.63, stdev=178.47, samples=19 00:26:01.962 iops : min= 128, max= 268, avg=226.37, stdev=44.58, samples=19 00:26:01.962 lat (msec) : 10=0.52%, 20=0.17%, 50=20.38%, 100=68.47%, 250=10.45% 00:26:01.962 cpu : usr=34.42%, sys=0.57%, ctx=900, majf=0, minf=9 00:26:01.962 IO depths : 1=1.7%, 2=3.7%, 4=11.8%, 8=70.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:01.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 complete : 0=0.0%, 4=90.7%, 8=5.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 issued rwts: total=2296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.962 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.962 filename1: (groupid=0, jobs=1): err= 0: pid=102417: Wed Nov 20 22:46:00 2024 00:26:01.962 read: IOPS=240, BW=961KiB/s (984kB/s)(9628KiB/10021msec) 00:26:01.962 slat (usec): min=4, max=8047, avg=30.98, stdev=380.55 00:26:01.962 clat (msec): min=23, max=127, avg=66.45, stdev=18.05 00:26:01.962 lat (msec): min=23, max=127, avg=66.48, stdev=18.05 00:26:01.962 clat percentiles (msec): 00:26:01.962 | 1.00th=[ 29], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 51], 00:26:01.962 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 66], 60.00th=[ 70], 00:26:01.962 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 96], 00:26:01.962 | 99.00th=[ 112], 99.50th=[ 112], 99.90th=[ 128], 99.95th=[ 128], 00:26:01.962 | 99.99th=[ 128] 00:26:01.962 bw ( KiB/s): min= 656, max= 1168, per=3.96%, avg=956.15, stdev=139.92, samples=20 00:26:01.962 iops : min= 164, max= 292, avg=239.00, stdev=34.96, samples=20 00:26:01.962 lat (msec) : 50=19.86%, 100=76.24%, 250=3.91% 00:26:01.962 cpu : usr=34.63%, sys=0.58%, ctx=1275, majf=0, minf=9 00:26:01.962 IO depths : 1=1.8%, 2=4.5%, 4=13.7%, 8=68.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:01.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 complete : 0=0.0%, 4=91.1%, 8=4.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 issued rwts: total=2407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.962 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.962 filename1: (groupid=0, jobs=1): err= 0: pid=102418: Wed Nov 20 22:46:00 2024 00:26:01.962 
read: IOPS=264, BW=1058KiB/s (1084kB/s)(10.4MiB/10033msec) 00:26:01.962 slat (usec): min=3, max=8024, avg=14.30, stdev=155.69 00:26:01.962 clat (msec): min=3, max=141, avg=60.33, stdev=19.94 00:26:01.962 lat (msec): min=3, max=141, avg=60.35, stdev=19.94 00:26:01.962 clat percentiles (msec): 00:26:01.962 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 47], 00:26:01.962 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 62], 00:26:01.962 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 96], 00:26:01.962 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 142], 99.95th=[ 142], 00:26:01.962 | 99.99th=[ 142] 00:26:01.962 bw ( KiB/s): min= 638, max= 1504, per=4.37%, avg=1055.25, stdev=186.77, samples=20 00:26:01.962 iops : min= 159, max= 376, avg=263.65, stdev=46.75, samples=20 00:26:01.962 lat (msec) : 4=0.60%, 10=2.41%, 50=27.92%, 100=66.05%, 250=3.01% 00:26:01.962 cpu : usr=33.50%, sys=0.55%, ctx=907, majf=0, minf=9 00:26:01.962 IO depths : 1=0.7%, 2=1.5%, 4=7.9%, 8=76.3%, 16=13.6%, 32=0.0%, >=64=0.0% 00:26:01.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 complete : 0=0.0%, 4=89.8%, 8=6.4%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.962 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.962 filename1: (groupid=0, jobs=1): err= 0: pid=102419: Wed Nov 20 22:46:00 2024 00:26:01.962 read: IOPS=263, BW=1054KiB/s (1079kB/s)(10.3MiB/10030msec) 00:26:01.962 slat (usec): min=6, max=8028, avg=18.48, stdev=220.53 00:26:01.962 clat (msec): min=23, max=135, avg=60.54, stdev=17.31 00:26:01.962 lat (msec): min=23, max=135, avg=60.56, stdev=17.31 00:26:01.962 clat percentiles (msec): 00:26:01.962 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 47], 00:26:01.962 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 61], 00:26:01.962 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 92], 00:26:01.962 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 136], 99.95th=[ 136], 00:26:01.962 | 99.99th=[ 136] 00:26:01.962 bw ( KiB/s): min= 744, max= 1272, per=4.37%, avg=1054.40, stdev=141.44, samples=20 00:26:01.962 iops : min= 186, max= 318, avg=263.60, stdev=35.36, samples=20 00:26:01.962 lat (msec) : 50=29.06%, 100=69.32%, 250=1.63% 00:26:01.962 cpu : usr=38.55%, sys=0.73%, ctx=1246, majf=0, minf=9 00:26:01.962 IO depths : 1=1.4%, 2=3.6%, 4=12.4%, 8=70.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:01.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.962 issued rwts: total=2643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.962 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.962 filename2: (groupid=0, jobs=1): err= 0: pid=102420: Wed Nov 20 22:46:00 2024 00:26:01.962 read: IOPS=276, BW=1106KiB/s (1132kB/s)(10.8MiB/10026msec) 00:26:01.962 slat (usec): min=4, max=5026, avg=21.94, stdev=204.48 00:26:01.962 clat (msec): min=23, max=138, avg=57.75, stdev=20.36 00:26:01.962 lat (msec): min=23, max=138, avg=57.77, stdev=20.36 00:26:01.962 clat percentiles (msec): 00:26:01.962 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:26:01.962 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 60], 00:26:01.962 | 70.00th=[ 65], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 96], 00:26:01.962 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 138], 00:26:01.962 | 99.99th=[ 138] 00:26:01.962 bw ( KiB/s): min= 640, max= 1504, 
per=4.56%, avg=1102.40, stdev=267.49, samples=20 00:26:01.962 iops : min= 160, max= 376, avg=275.60, stdev=66.87, samples=20 00:26:01.962 lat (msec) : 50=43.47%, 100=52.45%, 250=4.08% 00:26:01.962 cpu : usr=43.24%, sys=0.73%, ctx=1447, majf=0, minf=9 00:26:01.962 IO depths : 1=0.8%, 2=1.7%, 4=7.9%, 8=76.8%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:01.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 issued rwts: total=2772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.963 filename2: (groupid=0, jobs=1): err= 0: pid=102421: Wed Nov 20 22:46:00 2024 00:26:01.963 read: IOPS=242, BW=972KiB/s (995kB/s)(9720KiB/10005msec) 00:26:01.963 slat (usec): min=4, max=4035, avg=17.71, stdev=146.62 00:26:01.963 clat (msec): min=25, max=141, avg=65.77, stdev=19.98 00:26:01.963 lat (msec): min=25, max=141, avg=65.78, stdev=19.98 00:26:01.963 clat percentiles (msec): 00:26:01.963 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 49], 00:26:01.963 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 68], 00:26:01.963 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 100], 00:26:01.963 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:26:01.963 | 99.99th=[ 142] 00:26:01.963 bw ( KiB/s): min= 568, max= 1344, per=3.98%, avg=962.11, stdev=186.09, samples=19 00:26:01.963 iops : min= 142, max= 336, avg=240.53, stdev=46.52, samples=19 00:26:01.963 lat (msec) : 50=21.69%, 100=73.54%, 250=4.77% 00:26:01.963 cpu : usr=40.33%, sys=0.60%, ctx=1123, majf=0, minf=9 00:26:01.963 IO depths : 1=2.2%, 2=5.1%, 4=14.3%, 8=67.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:01.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 complete : 0=0.0%, 4=91.4%, 8=3.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.963 filename2: (groupid=0, jobs=1): err= 0: pid=102422: Wed Nov 20 22:46:00 2024 00:26:01.963 read: IOPS=281, BW=1127KiB/s (1154kB/s)(11.0MiB/10025msec) 00:26:01.963 slat (usec): min=6, max=4046, avg=14.25, stdev=107.32 00:26:01.963 clat (msec): min=27, max=129, avg=56.69, stdev=17.33 00:26:01.963 lat (msec): min=27, max=129, avg=56.71, stdev=17.33 00:26:01.963 clat percentiles (msec): 00:26:01.963 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 41], 00:26:01.963 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 59], 00:26:01.963 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 89], 00:26:01.963 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 130], 99.95th=[ 130], 00:26:01.963 | 99.99th=[ 130] 00:26:01.963 bw ( KiB/s): min= 864, max= 1368, per=4.65%, avg=1123.20, stdev=165.25, samples=20 00:26:01.963 iops : min= 216, max= 342, avg=280.80, stdev=41.31, samples=20 00:26:01.963 lat (msec) : 50=43.02%, 100=55.24%, 250=1.74% 00:26:01.963 cpu : usr=43.27%, sys=0.69%, ctx=1017, majf=0, minf=9 00:26:01.963 IO depths : 1=1.1%, 2=2.4%, 4=9.2%, 8=74.9%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:01.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 issued rwts: total=2824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.963 filename2: (groupid=0, 
jobs=1): err= 0: pid=102423: Wed Nov 20 22:46:00 2024 00:26:01.963 read: IOPS=257, BW=1028KiB/s (1053kB/s)(10.1MiB/10046msec) 00:26:01.963 slat (usec): min=3, max=8018, avg=15.47, stdev=157.72 00:26:01.963 clat (msec): min=4, max=143, avg=61.99, stdev=20.97 00:26:01.963 lat (msec): min=4, max=143, avg=62.01, stdev=20.97 00:26:01.963 clat percentiles (msec): 00:26:01.963 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 47], 00:26:01.963 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 65], 00:26:01.963 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 96], 00:26:01.963 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:26:01.963 | 99.99th=[ 144] 00:26:01.963 bw ( KiB/s): min= 768, max= 1351, per=4.27%, avg=1030.35, stdev=177.36, samples=20 00:26:01.963 iops : min= 192, max= 337, avg=257.55, stdev=44.27, samples=20 00:26:01.963 lat (msec) : 10=1.78%, 20=0.08%, 50=29.54%, 100=65.16%, 250=3.45% 00:26:01.963 cpu : usr=37.11%, sys=0.51%, ctx=972, majf=0, minf=9 00:26:01.963 IO depths : 1=1.3%, 2=2.6%, 4=10.1%, 8=74.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:01.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 issued rwts: total=2583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.963 filename2: (groupid=0, jobs=1): err= 0: pid=102424: Wed Nov 20 22:46:00 2024 00:26:01.963 read: IOPS=249, BW=1000KiB/s (1024kB/s)(9.80MiB/10038msec) 00:26:01.963 slat (usec): min=6, max=4026, avg=20.41, stdev=178.64 00:26:01.963 clat (msec): min=22, max=111, avg=63.86, stdev=17.62 00:26:01.963 lat (msec): min=22, max=111, avg=63.88, stdev=17.62 00:26:01.963 clat percentiles (msec): 00:26:01.963 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 48], 00:26:01.963 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 66], 00:26:01.963 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 96], 00:26:01.963 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 112], 99.95th=[ 112], 00:26:01.963 | 99.99th=[ 112] 00:26:01.963 bw ( KiB/s): min= 768, max= 1200, per=4.13%, avg=997.20, stdev=123.48, samples=20 00:26:01.963 iops : min= 192, max= 300, avg=249.30, stdev=30.87, samples=20 00:26:01.963 lat (msec) : 50=23.63%, 100=73.26%, 250=3.11% 00:26:01.963 cpu : usr=41.65%, sys=0.47%, ctx=1244, majf=0, minf=9 00:26:01.963 IO depths : 1=0.6%, 2=1.4%, 4=7.2%, 8=77.0%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:01.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 complete : 0=0.0%, 4=89.5%, 8=6.7%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 issued rwts: total=2509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.963 filename2: (groupid=0, jobs=1): err= 0: pid=102425: Wed Nov 20 22:46:00 2024 00:26:01.963 read: IOPS=266, BW=1065KiB/s (1091kB/s)(10.4MiB/10025msec) 00:26:01.963 slat (usec): min=6, max=7994, avg=17.42, stdev=189.46 00:26:01.963 clat (msec): min=13, max=138, avg=59.95, stdev=20.57 00:26:01.963 lat (msec): min=13, max=138, avg=59.97, stdev=20.57 00:26:01.963 clat percentiles (msec): 00:26:01.963 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 42], 00:26:01.963 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61], 00:26:01.963 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 88], 95.00th=[ 101], 00:26:01.963 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 140], 99.95th=[ 140], 
00:26:01.963 | 99.99th=[ 140] 00:26:01.963 bw ( KiB/s): min= 640, max= 1504, per=4.40%, avg=1061.60, stdev=255.94, samples=20 00:26:01.963 iops : min= 160, max= 376, avg=265.40, stdev=63.98, samples=20 00:26:01.963 lat (msec) : 20=0.22%, 50=37.38%, 100=57.72%, 250=4.68% 00:26:01.963 cpu : usr=42.39%, sys=0.61%, ctx=1151, majf=0, minf=9 00:26:01.963 IO depths : 1=0.7%, 2=1.6%, 4=7.9%, 8=76.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:01.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 complete : 0=0.0%, 4=89.7%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 issued rwts: total=2670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.963 filename2: (groupid=0, jobs=1): err= 0: pid=102426: Wed Nov 20 22:46:00 2024 00:26:01.963 read: IOPS=234, BW=939KiB/s (961kB/s)(9424KiB/10037msec) 00:26:01.963 slat (usec): min=6, max=8039, avg=20.19, stdev=233.73 00:26:01.963 clat (msec): min=26, max=138, avg=68.02, stdev=20.58 00:26:01.963 lat (msec): min=26, max=138, avg=68.04, stdev=20.57 00:26:01.963 clat percentiles (msec): 00:26:01.963 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 51], 00:26:01.963 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:26:01.963 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 109], 00:26:01.963 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:26:01.963 | 99.99th=[ 138] 00:26:01.963 bw ( KiB/s): min= 600, max= 1272, per=3.88%, avg=936.00, stdev=169.55, samples=20 00:26:01.963 iops : min= 150, max= 318, avg=234.00, stdev=42.39, samples=20 00:26:01.963 lat (msec) : 50=19.95%, 100=71.26%, 250=8.79% 00:26:01.963 cpu : usr=32.80%, sys=0.42%, ctx=892, majf=0, minf=9 00:26:01.963 IO depths : 1=1.4%, 2=3.7%, 4=13.1%, 8=69.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:01.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.963 filename2: (groupid=0, jobs=1): err= 0: pid=102427: Wed Nov 20 22:46:00 2024 00:26:01.963 read: IOPS=272, BW=1091KiB/s (1117kB/s)(10.7MiB/10031msec) 00:26:01.963 slat (usec): min=6, max=8023, avg=17.15, stdev=216.59 00:26:01.963 clat (msec): min=13, max=145, avg=58.44, stdev=20.38 00:26:01.963 lat (msec): min=13, max=145, avg=58.46, stdev=20.38 00:26:01.963 clat percentiles (msec): 00:26:01.963 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 43], 00:26:01.963 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 60], 00:26:01.963 | 70.00th=[ 63], 80.00th=[ 73], 90.00th=[ 87], 95.00th=[ 101], 00:26:01.963 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 146], 99.95th=[ 146], 00:26:01.963 | 99.99th=[ 146] 00:26:01.963 bw ( KiB/s): min= 640, max= 1424, per=4.50%, avg=1087.85, stdev=221.89, samples=20 00:26:01.963 iops : min= 160, max= 356, avg=271.95, stdev=55.47, samples=20 00:26:01.963 lat (msec) : 20=0.51%, 50=41.89%, 100=52.34%, 250=5.26% 00:26:01.963 cpu : usr=39.60%, sys=0.56%, ctx=1039, majf=0, minf=9 00:26:01.963 IO depths : 1=0.5%, 2=1.6%, 4=8.7%, 8=76.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:01.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.963 issued rwts: total=2736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.963 
latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.963 00:26:01.963 Run status group 0 (all jobs): 00:26:01.963 READ: bw=23.6MiB/s (24.7MB/s), 900KiB/s-1193KiB/s (922kB/s-1221kB/s), io=237MiB (249MB), run=10002-10059msec 00:26:01.963 22:46:00 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:01.963 22:46:00 -- target/dif.sh@43 -- # local sub 00:26:01.963 22:46:00 -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.963 22:46:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:01.963 22:46:00 -- target/dif.sh@36 -- # local sub_id=0 00:26:01.963 22:46:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:01.963 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.963 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.963 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.963 22:46:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:01.963 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.964 22:46:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:01.964 22:46:00 -- target/dif.sh@36 -- # local sub_id=1 00:26:01.964 22:46:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.964 22:46:00 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:01.964 22:46:00 -- target/dif.sh@36 -- # local sub_id=2 00:26:01.964 22:46:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:01.964 22:46:00 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:01.964 22:46:00 -- target/dif.sh@115 -- # numjobs=2 00:26:01.964 22:46:00 -- target/dif.sh@115 -- # iodepth=8 00:26:01.964 22:46:00 -- target/dif.sh@115 -- # runtime=5 00:26:01.964 22:46:00 -- target/dif.sh@115 -- # files=1 00:26:01.964 22:46:00 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:01.964 22:46:00 -- target/dif.sh@28 -- # local sub 00:26:01.964 22:46:00 -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.964 22:46:00 -- target/dif.sh@31 -- # create_subsystem 0 00:26:01.964 22:46:00 -- target/dif.sh@18 -- # local sub_id=0 00:26:01.964 22:46:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 1 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 bdev_null0 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 [2024-11-20 22:46:00.977341] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.964 22:46:00 -- target/dif.sh@31 -- # create_subsystem 1 00:26:01.964 22:46:00 -- target/dif.sh@18 -- # local sub_id=1 00:26:01.964 22:46:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 bdev_null1 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:01.964 22:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.964 22:46:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.964 22:46:01 -- common/autotest_common.sh@10 -- # set +x 00:26:01.964 22:46:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.964 22:46:01 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:01.964 22:46:01 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:01.964 22:46:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:01.964 22:46:01 -- nvmf/common.sh@520 -- # config=() 00:26:01.964 22:46:01 -- nvmf/common.sh@520 -- # local subsystem config 00:26:01.964 22:46:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.964 22:46:01 -- target/dif.sh@82 -- # gen_fio_conf 
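The trace above stands up two DIF-type-1 null bdevs and exposes each through its own NVMe/TCP subsystem before the next fio pass. For reference, a minimal standalone sketch of that setup using SPDK's rpc.py is shown below; it assumes an nvmf_tgt process is already running on the default /var/tmp/spdk.sock and that the TCP transport still has to be created (the harness creates it earlier in the test, outside this excerpt).

  #!/usr/bin/env bash
  # Sketch of the null-bdev + NVMe/TCP subsystem setup traced above (dif.sh create_subsystems 0 1).
  RPC=scripts/rpc.py

  # TCP transport; in the autotest run this already exists by this point.
  $RPC nvmf_create_transport -t tcp

  for i in 0 1; do
      # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 (same arguments as the trace)
      $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          --serial-number "53313233-$i" --allow-any-host
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done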
00:26:01.964 22:46:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:01.964 22:46:01 -- target/dif.sh@54 -- # local file 00:26:01.964 22:46:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:01.964 { 00:26:01.964 "params": { 00:26:01.964 "name": "Nvme$subsystem", 00:26:01.964 "trtype": "$TEST_TRANSPORT", 00:26:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.964 "adrfam": "ipv4", 00:26:01.964 "trsvcid": "$NVMF_PORT", 00:26:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.964 "hdgst": ${hdgst:-false}, 00:26:01.964 "ddgst": ${ddgst:-false} 00:26:01.964 }, 00:26:01.964 "method": "bdev_nvme_attach_controller" 00:26:01.964 } 00:26:01.964 EOF 00:26:01.964 )") 00:26:01.964 22:46:01 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.964 22:46:01 -- target/dif.sh@56 -- # cat 00:26:01.964 22:46:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:01.964 22:46:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:01.964 22:46:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:01.964 22:46:01 -- nvmf/common.sh@542 -- # cat 00:26:01.964 22:46:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:01.964 22:46:01 -- common/autotest_common.sh@1330 -- # shift 00:26:01.964 22:46:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:01.964 22:46:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.964 22:46:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:01.964 22:46:01 -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.964 22:46:01 -- target/dif.sh@73 -- # cat 00:26:01.964 22:46:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:01.964 22:46:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:01.964 22:46:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:01.964 22:46:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:01.964 22:46:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:01.964 { 00:26:01.964 "params": { 00:26:01.964 "name": "Nvme$subsystem", 00:26:01.964 "trtype": "$TEST_TRANSPORT", 00:26:01.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.964 "adrfam": "ipv4", 00:26:01.964 "trsvcid": "$NVMF_PORT", 00:26:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.964 "hdgst": ${hdgst:-false}, 00:26:01.964 "ddgst": ${ddgst:-false} 00:26:01.964 }, 00:26:01.964 "method": "bdev_nvme_attach_controller" 00:26:01.964 } 00:26:01.964 EOF 00:26:01.964 )") 00:26:01.964 22:46:01 -- target/dif.sh@72 -- # (( file++ )) 00:26:01.964 22:46:01 -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.964 22:46:01 -- nvmf/common.sh@542 -- # cat 00:26:01.964 22:46:01 -- nvmf/common.sh@544 -- # jq . 
00:26:01.964 22:46:01 -- nvmf/common.sh@545 -- # IFS=, 00:26:01.964 22:46:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:01.964 "params": { 00:26:01.964 "name": "Nvme0", 00:26:01.964 "trtype": "tcp", 00:26:01.964 "traddr": "10.0.0.2", 00:26:01.964 "adrfam": "ipv4", 00:26:01.964 "trsvcid": "4420", 00:26:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:01.964 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:01.964 "hdgst": false, 00:26:01.964 "ddgst": false 00:26:01.964 }, 00:26:01.964 "method": "bdev_nvme_attach_controller" 00:26:01.964 },{ 00:26:01.964 "params": { 00:26:01.964 "name": "Nvme1", 00:26:01.964 "trtype": "tcp", 00:26:01.964 "traddr": "10.0.0.2", 00:26:01.964 "adrfam": "ipv4", 00:26:01.964 "trsvcid": "4420", 00:26:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:01.964 "hdgst": false, 00:26:01.964 "ddgst": false 00:26:01.964 }, 00:26:01.964 "method": "bdev_nvme_attach_controller" 00:26:01.964 }' 00:26:01.964 22:46:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:01.964 22:46:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:01.964 22:46:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.964 22:46:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:01.964 22:46:01 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:01.964 22:46:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:01.964 22:46:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:01.964 22:46:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:01.964 22:46:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:01.964 22:46:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.964 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:01.964 ... 00:26:01.964 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:01.964 ... 00:26:01.965 fio-3.35 00:26:01.965 Starting 4 threads 00:26:01.965 [2024-11-20 22:46:01.711433] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
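At this point the script has handed fio two file descriptors: /dev/fd/62 carries the JSON bdev configuration printed just above, and /dev/fd/61 carries the generated job file. A standalone sketch of that pair is below; the bdev name Nvme0n1, the /tmp paths, and the omission of the second controller are assumptions for brevity, while the job parameters (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5) are taken from the trace.

  # Sketch of the JSON config + job file pair consumed by the SPDK fio bdev plugin above.
  cat > /tmp/spdk_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF

  cat > /tmp/dif.fio <<'EOF'
  ; sketch of the job file gen_fio_conf feeds to fio over /dev/fd/61
  [global]
  ioengine=spdk_bdev
  ; the SPDK bdev plugin requires thread=1
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5

  [filename0]
  ; assumed bdev name created by the attach entry above
  filename=Nvme0n1
  EOF

  # plugin path and fio binary path copied from this run's trace
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/spdk_bdev.json /tmp/dif.fio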
00:26:01.965 [2024-11-20 22:46:01.711953] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:06.253 00:26:06.253 filename0: (groupid=0, jobs=1): err= 0: pid=102563: Wed Nov 20 22:46:06 2024 00:26:06.253 read: IOPS=2162, BW=16.9MiB/s (17.7MB/s)(84.5MiB/5001msec) 00:26:06.253 slat (nsec): min=6077, max=91412, avg=15480.86, stdev=8244.50 00:26:06.253 clat (usec): min=2310, max=5529, avg=3627.43, stdev=161.59 00:26:06.253 lat (usec): min=2328, max=5541, avg=3642.91, stdev=161.74 00:26:06.253 clat percentiles (usec): 00:26:06.253 | 1.00th=[ 3294], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3556], 00:26:06.253 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:06.253 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3752], 95.00th=[ 3851], 00:26:06.253 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 4686], 99.95th=[ 4883], 00:26:06.254 | 99.99th=[ 5407] 00:26:06.254 bw ( KiB/s): min=17152, max=17424, per=24.99%, avg=17312.22, stdev=103.69, samples=9 00:26:06.254 iops : min= 2144, max= 2178, avg=2164.00, stdev=13.00, samples=9 00:26:06.254 lat (msec) : 4=98.19%, 10=1.81% 00:26:06.254 cpu : usr=95.02%, sys=3.70%, ctx=3, majf=0, minf=9 00:26:06.254 IO depths : 1=5.4%, 2=25.0%, 4=50.0%, 8=19.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.254 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.254 issued rwts: total=10816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.254 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.254 filename0: (groupid=0, jobs=1): err= 0: pid=102564: Wed Nov 20 22:46:06 2024 00:26:06.254 read: IOPS=2169, BW=16.9MiB/s (17.8MB/s)(84.8MiB/5003msec) 00:26:06.254 slat (nsec): min=6009, max=81188, avg=8894.92, stdev=5267.43 00:26:06.254 clat (usec): min=1208, max=6352, avg=3643.21, stdev=235.03 00:26:06.254 lat (usec): min=1215, max=6371, avg=3652.11, stdev=234.77 00:26:06.254 clat percentiles (usec): 00:26:06.254 | 1.00th=[ 2737], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 3556], 00:26:06.254 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:06.254 | 70.00th=[ 3687], 80.00th=[ 3720], 90.00th=[ 3785], 95.00th=[ 3851], 00:26:06.254 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5342], 99.95th=[ 5407], 00:26:06.254 | 99.99th=[ 6325] 00:26:06.254 bw ( KiB/s): min=17280, max=17536, per=25.09%, avg=17376.00, stdev=90.16, samples=9 00:26:06.254 iops : min= 2160, max= 2192, avg=2172.00, stdev=11.27, samples=9 00:26:06.254 lat (msec) : 2=0.30%, 4=98.00%, 10=1.70% 00:26:06.254 cpu : usr=95.42%, sys=3.42%, ctx=249, majf=0, minf=0 00:26:06.254 IO depths : 1=8.4%, 2=23.6%, 4=51.3%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.254 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.254 issued rwts: total=10854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.254 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.254 filename1: (groupid=0, jobs=1): err= 0: pid=102565: Wed Nov 20 22:46:06 2024 00:26:06.254 read: IOPS=2165, BW=16.9MiB/s (17.7MB/s)(84.6MiB/5003msec) 00:26:06.254 slat (usec): min=4, max=485, avg=11.11, stdev= 9.37 00:26:06.254 clat (usec): min=1208, max=6233, avg=3644.75, stdev=308.60 00:26:06.254 lat (usec): min=1215, max=6240, avg=3655.87, stdev=307.94 00:26:06.254 clat percentiles (usec): 00:26:06.254 | 1.00th=[ 2180], 5.00th=[ 3458], 
10.00th=[ 3523], 20.00th=[ 3556], 00:26:06.254 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:06.254 | 70.00th=[ 3687], 80.00th=[ 3720], 90.00th=[ 3785], 95.00th=[ 3916], 00:26:06.254 | 99.00th=[ 5080], 99.50th=[ 5276], 99.90th=[ 5997], 99.95th=[ 6063], 00:26:06.254 | 99.99th=[ 6128] 00:26:06.254 bw ( KiB/s): min=17152, max=17456, per=25.02%, avg=17328.22, stdev=105.30, samples=9 00:26:06.254 iops : min= 2144, max= 2182, avg=2166.00, stdev=13.15, samples=9 00:26:06.254 lat (msec) : 2=0.27%, 4=96.27%, 10=3.46% 00:26:06.254 cpu : usr=95.12%, sys=3.40%, ctx=96, majf=0, minf=9 00:26:06.254 IO depths : 1=7.3%, 2=20.1%, 4=54.9%, 8=17.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.254 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.254 issued rwts: total=10832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.254 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.254 filename1: (groupid=0, jobs=1): err= 0: pid=102566: Wed Nov 20 22:46:06 2024 00:26:06.254 read: IOPS=2162, BW=16.9MiB/s (17.7MB/s)(84.5MiB/5002msec) 00:26:06.254 slat (nsec): min=3489, max=83107, avg=15139.23, stdev=8105.43 00:26:06.254 clat (usec): min=1098, max=6632, avg=3629.01, stdev=266.37 00:26:06.254 lat (usec): min=1109, max=6644, avg=3644.15, stdev=266.18 00:26:06.254 clat percentiles (usec): 00:26:06.254 | 1.00th=[ 2769], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3556], 00:26:06.254 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:06.254 | 70.00th=[ 3654], 80.00th=[ 3687], 90.00th=[ 3785], 95.00th=[ 3851], 00:26:06.254 | 99.00th=[ 4621], 99.50th=[ 5342], 99.90th=[ 5997], 99.95th=[ 6063], 00:26:06.254 | 99.99th=[ 6390] 00:26:06.254 bw ( KiB/s): min=17152, max=17408, per=24.99%, avg=17308.44, stdev=102.38, samples=9 00:26:06.254 iops : min= 2144, max= 2176, avg=2163.56, stdev=12.80, samples=9 00:26:06.254 lat (msec) : 2=0.08%, 4=97.47%, 10=2.45% 00:26:06.254 cpu : usr=95.12%, sys=3.50%, ctx=74, majf=0, minf=9 00:26:06.254 IO depths : 1=4.9%, 2=25.0%, 4=50.0%, 8=20.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.254 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.254 issued rwts: total=10816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.254 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.254 00:26:06.254 Run status group 0 (all jobs): 00:26:06.254 READ: bw=67.6MiB/s (70.9MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.8MB/s), io=338MiB (355MB), run=5001-5003msec 00:26:06.513 22:46:07 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:06.513 22:46:07 -- target/dif.sh@43 -- # local sub 00:26:06.513 22:46:07 -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.513 22:46:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:06.513 22:46:07 -- target/dif.sh@36 -- # local sub_id=0 00:26:06.513 22:46:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:06.513 22:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 22:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.513 22:46:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:06.513 22:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 
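The two "Run status group 0" summaries above carry the aggregate numbers for each fio pass. When only those aggregates are of interest, a one-line filter over a captured log is enough; the log file name below is illustrative.

  # list the aggregate READ bandwidth of every fio run recorded in a captured autotest log
  grep -o 'READ: bw=[^,]*' nvmf-dif.log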
22:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.513 22:46:07 -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.513 22:46:07 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:06.513 22:46:07 -- target/dif.sh@36 -- # local sub_id=1 00:26:06.513 22:46:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.513 22:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 22:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.513 22:46:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:06.513 22:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 22:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.513 00:26:06.513 real 0m23.613s 00:26:06.513 user 2m8.148s 00:26:06.513 sys 0m3.587s 00:26:06.513 ************************************ 00:26:06.513 END TEST fio_dif_rand_params 00:26:06.513 ************************************ 00:26:06.513 22:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 22:46:07 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:06.513 22:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:06.513 22:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 ************************************ 00:26:06.513 START TEST fio_dif_digest 00:26:06.513 ************************************ 00:26:06.513 22:46:07 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:06.513 22:46:07 -- target/dif.sh@123 -- # local NULL_DIF 00:26:06.513 22:46:07 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:06.513 22:46:07 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:06.513 22:46:07 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:06.513 22:46:07 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:06.513 22:46:07 -- target/dif.sh@127 -- # numjobs=3 00:26:06.513 22:46:07 -- target/dif.sh@127 -- # iodepth=3 00:26:06.513 22:46:07 -- target/dif.sh@127 -- # runtime=10 00:26:06.513 22:46:07 -- target/dif.sh@128 -- # hdgst=true 00:26:06.513 22:46:07 -- target/dif.sh@128 -- # ddgst=true 00:26:06.513 22:46:07 -- target/dif.sh@130 -- # create_subsystems 0 00:26:06.513 22:46:07 -- target/dif.sh@28 -- # local sub 00:26:06.513 22:46:07 -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.513 22:46:07 -- target/dif.sh@31 -- # create_subsystem 0 00:26:06.513 22:46:07 -- target/dif.sh@18 -- # local sub_id=0 00:26:06.513 22:46:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:06.513 22:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 bdev_null0 00:26:06.513 22:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.513 22:46:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:06.513 22:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 22:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.513 22:46:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
00:26:06.513 22:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.513 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.513 22:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.513 22:46:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.514 22:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.514 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.514 [2024-11-20 22:46:07.185760] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.514 22:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.514 22:46:07 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:06.514 22:46:07 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:06.514 22:46:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:06.514 22:46:07 -- nvmf/common.sh@520 -- # config=() 00:26:06.514 22:46:07 -- nvmf/common.sh@520 -- # local subsystem config 00:26:06.514 22:46:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:06.514 22:46:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:06.514 { 00:26:06.514 "params": { 00:26:06.514 "name": "Nvme$subsystem", 00:26:06.514 "trtype": "$TEST_TRANSPORT", 00:26:06.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.514 "adrfam": "ipv4", 00:26:06.514 "trsvcid": "$NVMF_PORT", 00:26:06.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.514 "hdgst": ${hdgst:-false}, 00:26:06.514 "ddgst": ${ddgst:-false} 00:26:06.514 }, 00:26:06.514 "method": "bdev_nvme_attach_controller" 00:26:06.514 } 00:26:06.514 EOF 00:26:06.514 )") 00:26:06.514 22:46:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.514 22:46:07 -- target/dif.sh@82 -- # gen_fio_conf 00:26:06.514 22:46:07 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.514 22:46:07 -- target/dif.sh@54 -- # local file 00:26:06.514 22:46:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:06.514 22:46:07 -- target/dif.sh@56 -- # cat 00:26:06.514 22:46:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:06.514 22:46:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:06.514 22:46:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.514 22:46:07 -- common/autotest_common.sh@1330 -- # shift 00:26:06.514 22:46:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:06.514 22:46:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.514 22:46:07 -- nvmf/common.sh@542 -- # cat 00:26:06.514 22:46:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.514 22:46:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:06.514 22:46:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:06.514 22:46:07 -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.514 22:46:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:06.514 22:46:07 -- nvmf/common.sh@544 -- # jq . 
00:26:06.514 22:46:07 -- nvmf/common.sh@545 -- # IFS=, 00:26:06.514 22:46:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:06.514 "params": { 00:26:06.514 "name": "Nvme0", 00:26:06.514 "trtype": "tcp", 00:26:06.514 "traddr": "10.0.0.2", 00:26:06.514 "adrfam": "ipv4", 00:26:06.514 "trsvcid": "4420", 00:26:06.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.514 "hdgst": true, 00:26:06.514 "ddgst": true 00:26:06.514 }, 00:26:06.514 "method": "bdev_nvme_attach_controller" 00:26:06.514 }' 00:26:06.514 22:46:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:06.514 22:46:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:06.514 22:46:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.514 22:46:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:06.514 22:46:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.514 22:46:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:06.773 22:46:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:06.773 22:46:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:06.773 22:46:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:06.773 22:46:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.773 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:06.773 ... 00:26:06.773 fio-3.35 00:26:06.773 Starting 3 threads 00:26:07.341 [2024-11-20 22:46:07.781190] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
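The attach-controller parameters printed just above differ from the earlier randread runs only in the two digest switches: "hdgst": true and "ddgst": true, so every NVMe/TCP PDU in this pass carries header and data digests. Starting from the /tmp/spdk_bdev.json sketch shown earlier (an assumption, since that file is not part of this run), the same change can be applied with jq.

  # enable NVMe/TCP header and data digests in the attach entry of the earlier sketch config
  jq '.subsystems[0].config[0].params += {"hdgst": true, "ddgst": true}' \
      /tmp/spdk_bdev.json > /tmp/spdk_bdev_digest.json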
00:26:07.341 [2024-11-20 22:46:07.781297] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:17.316 00:26:17.316 filename0: (groupid=0, jobs=1): err= 0: pid=102672: Wed Nov 20 22:46:17 2024 00:26:17.316 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(254MiB/10007msec) 00:26:17.316 slat (nsec): min=6257, max=63772, avg=14719.05, stdev=5981.72 00:26:17.316 clat (usec): min=6808, max=53067, avg=14777.38, stdev=12861.36 00:26:17.316 lat (usec): min=6828, max=53077, avg=14792.10, stdev=12861.32 00:26:17.316 clat percentiles (usec): 00:26:17.316 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:26:17.316 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:26:17.316 | 70.00th=[10683], 80.00th=[10945], 90.00th=[50070], 95.00th=[51119], 00:26:17.316 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:26:17.316 | 99.99th=[53216] 00:26:17.316 bw ( KiB/s): min=15104, max=34491, per=27.15%, avg=26152.79, stdev=4900.95, samples=19 00:26:17.316 iops : min= 118, max= 269, avg=204.26, stdev=38.21, samples=19 00:26:17.316 lat (msec) : 10=30.01%, 20=58.75%, 50=1.48%, 100=9.76% 00:26:17.316 cpu : usr=94.82%, sys=3.94%, ctx=5, majf=0, minf=9 00:26:17.316 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.316 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.316 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.316 filename0: (groupid=0, jobs=1): err= 0: pid=102673: Wed Nov 20 22:46:17 2024 00:26:17.316 read: IOPS=249, BW=31.1MiB/s (32.7MB/s)(312MiB/10004msec) 00:26:17.316 slat (nsec): min=6388, max=61637, avg=17482.35, stdev=6189.53 00:26:17.316 clat (usec): min=4234, max=17568, avg=12015.83, stdev=2794.01 00:26:17.316 lat (usec): min=4252, max=17589, avg=12033.31, stdev=2794.85 00:26:17.316 clat percentiles (usec): 00:26:17.316 | 1.00th=[ 7898], 5.00th=[ 8160], 10.00th=[ 8356], 20.00th=[ 8717], 00:26:17.316 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[13435], 60.00th=[13960], 00:26:17.316 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15401], 00:26:17.316 | 99.00th=[16057], 99.50th=[16319], 99.90th=[17171], 99.95th=[17171], 00:26:17.316 | 99.99th=[17695] 00:26:17.316 bw ( KiB/s): min=27392, max=37120, per=32.99%, avg=31784.32, stdev=2289.67, samples=19 00:26:17.316 iops : min= 214, max= 290, avg=248.26, stdev=17.96, samples=19 00:26:17.316 lat (msec) : 10=37.06%, 20=62.94% 00:26:17.316 cpu : usr=94.68%, sys=3.76%, ctx=87, majf=0, minf=11 00:26:17.316 IO depths : 1=4.8%, 2=95.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.316 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.316 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.316 filename0: (groupid=0, jobs=1): err= 0: pid=102674: Wed Nov 20 22:46:17 2024 00:26:17.316 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(380MiB/10044msec) 00:26:17.316 slat (nsec): min=6144, max=61918, avg=14722.58, stdev=6378.44 00:26:17.316 clat (usec): min=5678, max=51882, avg=9888.97, stdev=2619.06 00:26:17.316 lat (usec): min=5689, max=51894, avg=9903.70, stdev=2618.45 00:26:17.316 clat percentiles (usec): 00:26:17.316 | 
1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7504], 00:26:17.316 | 30.00th=[ 7832], 40.00th=[ 9110], 50.00th=[10552], 60.00th=[10945], 00:26:17.316 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12256], 95.00th=[12780], 00:26:17.316 | 99.00th=[13698], 99.50th=[14091], 99.90th=[45351], 99.95th=[51643], 00:26:17.316 | 99.99th=[51643] 00:26:17.316 bw ( KiB/s): min=31232, max=43776, per=40.32%, avg=38844.25, stdev=2812.81, samples=20 00:26:17.316 iops : min= 244, max= 342, avg=303.45, stdev=21.99, samples=20 00:26:17.316 lat (msec) : 10=45.18%, 20=54.66%, 50=0.07%, 100=0.10% 00:26:17.316 cpu : usr=93.51%, sys=4.66%, ctx=19, majf=0, minf=9 00:26:17.316 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.316 issued rwts: total=3037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.316 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.316 00:26:17.316 Run status group 0 (all jobs): 00:26:17.316 READ: bw=94.1MiB/s (98.6MB/s), 25.3MiB/s-37.8MiB/s (26.6MB/s-39.6MB/s), io=945MiB (991MB), run=10004-10044msec 00:26:17.575 22:46:18 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:17.575 22:46:18 -- target/dif.sh@43 -- # local sub 00:26:17.575 22:46:18 -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.575 22:46:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.575 22:46:18 -- target/dif.sh@36 -- # local sub_id=0 00:26:17.575 22:46:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.575 22:46:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.575 22:46:18 -- common/autotest_common.sh@10 -- # set +x 00:26:17.575 22:46:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.575 22:46:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.575 22:46:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.575 22:46:18 -- common/autotest_common.sh@10 -- # set +x 00:26:17.575 22:46:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.575 00:26:17.575 real 0m11.006s 00:26:17.575 user 0m28.983s 00:26:17.575 sys 0m1.515s 00:26:17.575 22:46:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:17.575 22:46:18 -- common/autotest_common.sh@10 -- # set +x 00:26:17.575 ************************************ 00:26:17.575 END TEST fio_dif_digest 00:26:17.575 ************************************ 00:26:17.575 22:46:18 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:17.575 22:46:18 -- target/dif.sh@147 -- # nvmftestfini 00:26:17.575 22:46:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:17.575 22:46:18 -- nvmf/common.sh@116 -- # sync 00:26:17.575 22:46:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:17.575 22:46:18 -- nvmf/common.sh@119 -- # set +e 00:26:17.575 22:46:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:17.575 22:46:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:17.575 rmmod nvme_tcp 00:26:17.575 rmmod nvme_fabrics 00:26:17.575 rmmod nvme_keyring 00:26:17.575 22:46:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:17.575 22:46:18 -- nvmf/common.sh@123 -- # set -e 00:26:17.575 22:46:18 -- nvmf/common.sh@124 -- # return 0 00:26:17.575 22:46:18 -- nvmf/common.sh@477 -- # '[' -n 101901 ']' 00:26:17.575 22:46:18 -- nvmf/common.sh@478 -- # killprocess 101901 00:26:17.575 22:46:18 -- common/autotest_common.sh@936 -- # '[' -z 101901 ']' 
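The destroy_subsystems teardown traced above amounts to two RPCs against the still-running target; rpc_cmd in these test scripts is a thin wrapper, so outside the harness the rough equivalent is:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0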
00:26:17.575 22:46:18 -- common/autotest_common.sh@940 -- # kill -0 101901 00:26:17.833 22:46:18 -- common/autotest_common.sh@941 -- # uname 00:26:17.833 22:46:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:17.833 22:46:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101901 00:26:17.833 22:46:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:17.833 22:46:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:17.833 killing process with pid 101901 00:26:17.833 22:46:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101901' 00:26:17.833 22:46:18 -- common/autotest_common.sh@955 -- # kill 101901 00:26:17.833 22:46:18 -- common/autotest_common.sh@960 -- # wait 101901 00:26:18.092 22:46:18 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:18.092 22:46:18 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:18.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:18.350 Waiting for block devices as requested 00:26:18.350 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:18.608 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:18.608 22:46:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:18.608 22:46:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:18.608 22:46:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.608 22:46:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:18.608 22:46:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.608 22:46:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:18.608 22:46:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.608 22:46:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:18.608 00:26:18.608 real 1m0.097s 00:26:18.608 user 3m51.066s 00:26:18.608 sys 0m14.568s 00:26:18.608 22:46:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:18.608 22:46:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.608 ************************************ 00:26:18.608 END TEST nvmf_dif 00:26:18.608 ************************************ 00:26:18.608 22:46:19 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:18.608 22:46:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:18.608 22:46:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:18.608 22:46:19 -- common/autotest_common.sh@10 -- # set +x 00:26:18.608 ************************************ 00:26:18.608 START TEST nvmf_abort_qd_sizes 00:26:18.608 ************************************ 00:26:18.608 22:46:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:18.867 * Looking for test storage... 
00:26:18.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:18.867 22:46:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:18.867 22:46:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:18.867 22:46:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:18.867 22:46:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:18.867 22:46:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:18.867 22:46:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:18.867 22:46:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:18.867 22:46:19 -- scripts/common.sh@335 -- # IFS=.-: 00:26:18.867 22:46:19 -- scripts/common.sh@335 -- # read -ra ver1 00:26:18.867 22:46:19 -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.867 22:46:19 -- scripts/common.sh@336 -- # read -ra ver2 00:26:18.867 22:46:19 -- scripts/common.sh@337 -- # local 'op=<' 00:26:18.867 22:46:19 -- scripts/common.sh@339 -- # ver1_l=2 00:26:18.867 22:46:19 -- scripts/common.sh@340 -- # ver2_l=1 00:26:18.867 22:46:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:18.867 22:46:19 -- scripts/common.sh@343 -- # case "$op" in 00:26:18.867 22:46:19 -- scripts/common.sh@344 -- # : 1 00:26:18.867 22:46:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:18.867 22:46:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:18.867 22:46:19 -- scripts/common.sh@364 -- # decimal 1 00:26:18.867 22:46:19 -- scripts/common.sh@352 -- # local d=1 00:26:18.867 22:46:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.867 22:46:19 -- scripts/common.sh@354 -- # echo 1 00:26:18.867 22:46:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:18.867 22:46:19 -- scripts/common.sh@365 -- # decimal 2 00:26:18.867 22:46:19 -- scripts/common.sh@352 -- # local d=2 00:26:18.867 22:46:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.867 22:46:19 -- scripts/common.sh@354 -- # echo 2 00:26:18.867 22:46:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:18.867 22:46:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:18.867 22:46:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:18.867 22:46:19 -- scripts/common.sh@367 -- # return 0 00:26:18.867 22:46:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.867 22:46:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:18.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.867 --rc genhtml_branch_coverage=1 00:26:18.867 --rc genhtml_function_coverage=1 00:26:18.867 --rc genhtml_legend=1 00:26:18.867 --rc geninfo_all_blocks=1 00:26:18.867 --rc geninfo_unexecuted_blocks=1 00:26:18.867 00:26:18.867 ' 00:26:18.867 22:46:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:18.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.867 --rc genhtml_branch_coverage=1 00:26:18.867 --rc genhtml_function_coverage=1 00:26:18.867 --rc genhtml_legend=1 00:26:18.867 --rc geninfo_all_blocks=1 00:26:18.867 --rc geninfo_unexecuted_blocks=1 00:26:18.867 00:26:18.867 ' 00:26:18.868 22:46:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.868 --rc genhtml_branch_coverage=1 00:26:18.868 --rc genhtml_function_coverage=1 00:26:18.868 --rc genhtml_legend=1 00:26:18.868 --rc geninfo_all_blocks=1 00:26:18.868 --rc geninfo_unexecuted_blocks=1 00:26:18.868 00:26:18.868 ' 00:26:18.868 
22:46:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.868 --rc genhtml_branch_coverage=1 00:26:18.868 --rc genhtml_function_coverage=1 00:26:18.868 --rc genhtml_legend=1 00:26:18.868 --rc geninfo_all_blocks=1 00:26:18.868 --rc geninfo_unexecuted_blocks=1 00:26:18.868 00:26:18.868 ' 00:26:18.868 22:46:19 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:18.868 22:46:19 -- nvmf/common.sh@7 -- # uname -s 00:26:18.868 22:46:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.868 22:46:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.868 22:46:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.868 22:46:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.868 22:46:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.868 22:46:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.868 22:46:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.868 22:46:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.868 22:46:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.868 22:46:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.868 22:46:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 00:26:18.868 22:46:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=c4482f2d-46a3-481d-af07-c04f1b86df27 00:26:18.868 22:46:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.868 22:46:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.868 22:46:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:18.868 22:46:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:18.868 22:46:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.868 22:46:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.868 22:46:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.868 22:46:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.868 22:46:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.868 22:46:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.868 22:46:19 -- paths/export.sh@5 -- # export PATH 00:26:18.868 22:46:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.868 22:46:19 -- nvmf/common.sh@46 -- # : 0 00:26:18.868 22:46:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:18.868 22:46:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:18.868 22:46:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:18.868 22:46:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.868 22:46:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.868 22:46:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:18.868 22:46:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:18.868 22:46:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:18.868 22:46:19 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:18.868 22:46:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:18.868 22:46:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.868 22:46:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:18.868 22:46:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:18.868 22:46:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:18.868 22:46:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.868 22:46:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:18.868 22:46:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.868 22:46:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:18.868 22:46:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:18.868 22:46:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:18.868 22:46:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:18.868 22:46:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:18.868 22:46:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:18.868 22:46:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.868 22:46:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.868 22:46:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:18.868 22:46:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:18.868 22:46:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:18.868 22:46:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:18.868 22:46:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:18.868 22:46:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.868 22:46:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:18.868 22:46:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:18.868 22:46:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:18.868 22:46:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:18.868 22:46:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:18.868 22:46:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:18.868 Cannot find device "nvmf_tgt_br" 00:26:18.868 22:46:19 -- nvmf/common.sh@154 -- # true 00:26:18.868 22:46:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:18.868 Cannot find device "nvmf_tgt_br2" 00:26:18.868 22:46:19 -- nvmf/common.sh@155 -- # true 
00:26:18.868 22:46:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:18.868 22:46:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:18.868 Cannot find device "nvmf_tgt_br" 00:26:18.868 22:46:19 -- nvmf/common.sh@157 -- # true 00:26:18.868 22:46:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:18.868 Cannot find device "nvmf_tgt_br2" 00:26:18.868 22:46:19 -- nvmf/common.sh@158 -- # true 00:26:18.868 22:46:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:19.127 22:46:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:19.127 22:46:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:19.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:19.127 22:46:19 -- nvmf/common.sh@161 -- # true 00:26:19.127 22:46:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:19.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:19.127 22:46:19 -- nvmf/common.sh@162 -- # true 00:26:19.127 22:46:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:19.127 22:46:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:19.127 22:46:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:19.127 22:46:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:19.127 22:46:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:19.127 22:46:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:19.127 22:46:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:19.127 22:46:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:19.127 22:46:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:19.127 22:46:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:19.127 22:46:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:19.127 22:46:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:19.127 22:46:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:19.127 22:46:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:19.127 22:46:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:19.127 22:46:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:19.127 22:46:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:19.127 22:46:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:19.127 22:46:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:19.127 22:46:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:19.127 22:46:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:19.127 22:46:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:19.127 22:46:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:19.127 22:46:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:19.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:19.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:26:19.127 00:26:19.127 --- 10.0.0.2 ping statistics --- 00:26:19.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.127 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:19.127 22:46:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:19.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:19.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:26:19.127 00:26:19.127 --- 10.0.0.3 ping statistics --- 00:26:19.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.127 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:19.127 22:46:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:19.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:19.127 00:26:19.127 --- 10.0.0.1 ping statistics --- 00:26:19.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.127 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:19.127 22:46:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.127 22:46:19 -- nvmf/common.sh@421 -- # return 0 00:26:19.127 22:46:19 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:19.127 22:46:19 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:20.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:20.063 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:20.063 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:20.063 22:46:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.063 22:46:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:20.063 22:46:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:20.063 22:46:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.063 22:46:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:20.063 22:46:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:20.063 22:46:20 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:20.063 22:46:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:20.063 22:46:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:20.063 22:46:20 -- common/autotest_common.sh@10 -- # set +x 00:26:20.322 22:46:20 -- nvmf/common.sh@469 -- # nvmfpid=103276 00:26:20.322 22:46:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:20.322 22:46:20 -- nvmf/common.sh@470 -- # waitforlisten 103276 00:26:20.322 22:46:20 -- common/autotest_common.sh@829 -- # '[' -z 103276 ']' 00:26:20.322 22:46:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.322 22:46:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.322 22:46:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.322 22:46:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.322 22:46:20 -- common/autotest_common.sh@10 -- # set +x 00:26:20.322 [2024-11-20 22:46:20.857475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
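Underneath the trace, nvmf_veth_init builds a small two-namespace topology: the initiator keeps 10.0.0.1 in the default namespace, the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (plus 10.0.0.3 on a second interface), and both halves are bridged via nvmf_br. A condensed sketch of the same commands, with interface names and addresses taken from the trace and the second target interface plus the individual 'ip link set ... up' steps elided:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as verified above
    # the target application itself then runs inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf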
00:26:20.322 [2024-11-20 22:46:20.857565] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.322 [2024-11-20 22:46:21.003194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.581 [2024-11-20 22:46:21.099414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:20.581 [2024-11-20 22:46:21.099622] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.581 [2024-11-20 22:46:21.099642] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.581 [2024-11-20 22:46:21.099656] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.581 [2024-11-20 22:46:21.099849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.581 [2024-11-20 22:46:21.100892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.581 [2024-11-20 22:46:21.101309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.581 [2024-11-20 22:46:21.101313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.518 22:46:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.518 22:46:21 -- common/autotest_common.sh@862 -- # return 0 00:26:21.518 22:46:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:21.518 22:46:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:21.518 22:46:21 -- common/autotest_common.sh@10 -- # set +x 00:26:21.518 22:46:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:21.518 22:46:21 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:21.518 22:46:21 -- scripts/common.sh@312 -- # local nvmes 00:26:21.518 22:46:21 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:21.518 22:46:21 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:21.518 22:46:21 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:21.518 22:46:21 -- scripts/common.sh@297 -- # local bdf= 00:26:21.518 22:46:21 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:21.518 22:46:21 -- scripts/common.sh@232 -- # local class 00:26:21.518 22:46:21 -- scripts/common.sh@233 -- # local subclass 00:26:21.518 22:46:21 -- scripts/common.sh@234 -- # local progif 00:26:21.518 22:46:21 -- scripts/common.sh@235 -- # printf %02x 1 00:26:21.518 22:46:21 -- scripts/common.sh@235 -- # class=01 00:26:21.518 22:46:21 -- scripts/common.sh@236 -- # printf %02x 8 00:26:21.518 22:46:21 -- scripts/common.sh@236 -- # subclass=08 00:26:21.518 22:46:21 -- scripts/common.sh@237 -- # printf %02x 2 00:26:21.518 22:46:21 -- scripts/common.sh@237 -- # progif=02 00:26:21.518 22:46:21 -- scripts/common.sh@239 -- # hash lspci 00:26:21.518 22:46:21 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:21.518 22:46:21 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:21.518 22:46:21 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:21.518 22:46:21 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:21.518 22:46:21 -- scripts/common.sh@244 -- # tr -d '"' 00:26:21.518 22:46:21 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:21.518 22:46:21 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:21.518 22:46:21 -- scripts/common.sh@15 -- # local i 00:26:21.518 22:46:21 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:21.518 22:46:21 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:21.518 22:46:21 -- scripts/common.sh@24 -- # return 0 00:26:21.518 22:46:21 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:21.518 22:46:21 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:21.518 22:46:21 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:21.518 22:46:21 -- scripts/common.sh@15 -- # local i 00:26:21.518 22:46:21 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:21.518 22:46:21 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:21.518 22:46:21 -- scripts/common.sh@24 -- # return 0 00:26:21.518 22:46:21 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:21.518 22:46:21 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:21.518 22:46:21 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:21.518 22:46:21 -- scripts/common.sh@322 -- # uname -s 00:26:21.518 22:46:21 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:21.518 22:46:21 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:21.518 22:46:21 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:21.518 22:46:21 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:21.518 22:46:21 -- scripts/common.sh@322 -- # uname -s 00:26:21.518 22:46:21 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:21.518 22:46:21 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:21.518 22:46:21 -- scripts/common.sh@327 -- # (( 2 )) 00:26:21.518 22:46:21 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:21.518 22:46:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:21.518 22:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:21.518 22:46:21 -- common/autotest_common.sh@10 -- # set +x 00:26:21.518 ************************************ 00:26:21.518 START TEST spdk_target_abort 00:26:21.518 ************************************ 00:26:21.518 22:46:21 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:21.518 22:46:21 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:21.518 22:46:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.518 22:46:21 -- common/autotest_common.sh@10 -- # set +x 00:26:21.518 spdk_targetn1 00:26:21.518 22:46:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.518 22:46:22 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.518 22:46:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.518 22:46:22 -- common/autotest_common.sh@10 -- # set +x 00:26:21.518 [2024-11-20 
22:46:22.070548] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.518 22:46:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.518 22:46:22 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:21.518 22:46:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.518 22:46:22 -- common/autotest_common.sh@10 -- # set +x 00:26:21.518 22:46:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:21.519 22:46:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.519 22:46:22 -- common/autotest_common.sh@10 -- # set +x 00:26:21.519 22:46:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:21.519 22:46:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.519 22:46:22 -- common/autotest_common.sh@10 -- # set +x 00:26:21.519 [2024-11-20 22:46:22.102799] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.519 22:46:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:21.519 22:46:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:24.806 Initializing NVMe Controllers 00:26:24.806 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:24.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:24.806 Initialization complete. Launching workers. 00:26:24.806 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9842, failed: 0 00:26:24.806 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1190, failed to submit 8652 00:26:24.806 success 702, unsuccess 488, failed 0 00:26:24.806 22:46:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:24.806 22:46:25 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:28.092 Initializing NVMe Controllers 00:26:28.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:28.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:28.092 Initialization complete. Launching workers. 00:26:28.092 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5987, failed: 0 00:26:28.092 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1218, failed to submit 4769 00:26:28.092 success 279, unsuccess 939, failed 0 00:26:28.092 22:46:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:28.092 22:46:28 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:31.376 Initializing NVMe Controllers 00:26:31.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:31.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:31.376 Initialization complete. Launching workers. 
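Stripped of the harness plumbing, the spdk_target_abort flow above is: attach the local PCIe NVMe device as a bdev, export it over NVMe/TCP, then drive it with the stock abort example at increasing queue depths. A rough standalone equivalent, with RPC names and arguments as traced (only -q changes between the three passes shown in the log):

    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420

    build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'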
00:26:31.376 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30310, failed: 0 00:26:31.376 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2628, failed to submit 27682 00:26:31.376 success 371, unsuccess 2257, failed 0 00:26:31.376 22:46:31 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:31.376 22:46:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.376 22:46:31 -- common/autotest_common.sh@10 -- # set +x 00:26:31.376 22:46:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.376 22:46:31 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:31.376 22:46:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.376 22:46:31 -- common/autotest_common.sh@10 -- # set +x 00:26:31.634 22:46:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.634 22:46:32 -- target/abort_qd_sizes.sh@62 -- # killprocess 103276 00:26:31.634 22:46:32 -- common/autotest_common.sh@936 -- # '[' -z 103276 ']' 00:26:31.634 22:46:32 -- common/autotest_common.sh@940 -- # kill -0 103276 00:26:31.634 22:46:32 -- common/autotest_common.sh@941 -- # uname 00:26:31.634 22:46:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:31.634 22:46:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103276 00:26:31.634 killing process with pid 103276 00:26:31.634 22:46:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:31.634 22:46:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:31.634 22:46:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103276' 00:26:31.634 22:46:32 -- common/autotest_common.sh@955 -- # kill 103276 00:26:31.634 22:46:32 -- common/autotest_common.sh@960 -- # wait 103276 00:26:31.892 ************************************ 00:26:31.892 END TEST spdk_target_abort 00:26:31.892 ************************************ 00:26:31.892 00:26:31.892 real 0m10.582s 00:26:31.892 user 0m43.075s 00:26:31.892 sys 0m1.956s 00:26:31.892 22:46:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:31.892 22:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:31.892 22:46:32 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:31.892 22:46:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:31.892 22:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:31.892 22:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:32.150 ************************************ 00:26:32.150 START TEST kernel_target_abort 00:26:32.150 ************************************ 00:26:32.150 22:46:32 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:32.150 22:46:32 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:32.150 22:46:32 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:32.150 22:46:32 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:32.150 22:46:32 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:32.150 22:46:32 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:32.150 22:46:32 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:32.150 22:46:32 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:32.150 22:46:32 -- nvmf/common.sh@627 -- # local block nvme 00:26:32.150 22:46:32 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:32.150 22:46:32 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:32.150 22:46:32 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:32.150 22:46:32 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:32.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:32.409 Waiting for block devices as requested 00:26:32.409 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:32.409 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:32.667 22:46:33 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:32.667 22:46:33 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:32.667 22:46:33 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:32.667 22:46:33 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:32.667 22:46:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:32.667 No valid GPT data, bailing 00:26:32.667 22:46:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:32.667 22:46:33 -- scripts/common.sh@393 -- # pt= 00:26:32.667 22:46:33 -- scripts/common.sh@394 -- # return 1 00:26:32.667 22:46:33 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:32.667 22:46:33 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:32.667 22:46:33 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:32.667 22:46:33 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:32.667 22:46:33 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:32.667 22:46:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:32.667 No valid GPT data, bailing 00:26:32.667 22:46:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:32.667 22:46:33 -- scripts/common.sh@393 -- # pt= 00:26:32.667 22:46:33 -- scripts/common.sh@394 -- # return 1 00:26:32.667 22:46:33 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:32.667 22:46:33 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:32.667 22:46:33 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:32.667 22:46:33 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:32.667 22:46:33 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:32.667 22:46:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:32.667 No valid GPT data, bailing 00:26:32.667 22:46:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:32.667 22:46:33 -- scripts/common.sh@393 -- # pt= 00:26:32.667 22:46:33 -- scripts/common.sh@394 -- # return 1 00:26:32.667 22:46:33 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:32.667 22:46:33 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:32.667 22:46:33 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:32.667 22:46:33 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:32.667 22:46:33 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:32.926 22:46:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:32.926 No valid GPT data, bailing 00:26:32.926 22:46:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:32.926 22:46:33 -- scripts/common.sh@393 -- # pt= 00:26:32.926 22:46:33 -- scripts/common.sh@394 -- # return 1 00:26:32.926 22:46:33 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:32.926 22:46:33 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:32.926 22:46:33 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:32.926 22:46:33 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:32.926 22:46:33 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:32.926 22:46:33 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:32.926 22:46:33 -- nvmf/common.sh@654 -- # echo 1 00:26:32.926 22:46:33 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:32.926 22:46:33 -- nvmf/common.sh@656 -- # echo 1 00:26:32.926 22:46:33 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:32.926 22:46:33 -- nvmf/common.sh@663 -- # echo tcp 00:26:32.926 22:46:33 -- nvmf/common.sh@664 -- # echo 4420 00:26:32.926 22:46:33 -- nvmf/common.sh@665 -- # echo ipv4 00:26:32.926 22:46:33 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:32.926 22:46:33 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c4482f2d-46a3-481d-af07-c04f1b86df27 --hostid=c4482f2d-46a3-481d-af07-c04f1b86df27 -a 10.0.0.1 -t tcp -s 4420 00:26:32.926 00:26:32.926 Discovery Log Number of Records 2, Generation counter 2 00:26:32.926 =====Discovery Log Entry 0====== 00:26:32.926 trtype: tcp 00:26:32.926 adrfam: ipv4 00:26:32.926 subtype: current discovery subsystem 00:26:32.926 treq: not specified, sq flow control disable supported 00:26:32.926 portid: 1 00:26:32.926 trsvcid: 4420 00:26:32.926 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:32.926 traddr: 10.0.0.1 00:26:32.926 eflags: none 00:26:32.926 sectype: none 00:26:32.926 =====Discovery Log Entry 1====== 00:26:32.926 trtype: tcp 00:26:32.926 adrfam: ipv4 00:26:32.926 subtype: nvme subsystem 00:26:32.926 treq: not specified, sq flow control disable supported 00:26:32.926 portid: 1 00:26:32.926 trsvcid: 4420 00:26:32.926 subnqn: kernel_target 00:26:32.926 traddr: 10.0.0.1 00:26:32.926 eflags: none 00:26:32.926 sectype: none 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
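The configure_kernel_target steps above use the kernel nvmet configfs tree directly instead of an SPDK target: create a subsystem and namespace, back the namespace with /dev/nvme1n3, open a TCP port on 10.0.0.1:4420, and link the subsystem to the port. A condensed sketch with the standard nvmet attribute files spelled out (the trace only shows the echoed values, not their destination paths; the 'SPDK-kernel_target' model/serial write and the hostnqn/hostid flags on the discover call are elided here):

    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir subsystems/kernel_target
    mkdir subsystems/kernel_target/namespaces/1
    mkdir ports/1
    echo 1             > subsystems/kernel_target/attr_allow_any_host
    echo /dev/nvme1n3  > subsystems/kernel_target/namespaces/1/device_path
    echo 1             > subsystems/kernel_target/namespaces/1/enable
    echo 10.0.0.1      > ports/1/addr_traddr
    echo tcp           > ports/1/addr_trtype
    echo 4420          > ports/1/addr_trsvcid
    echo ipv4          > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/
    nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list kernel_target, as in the log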
00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:32.926 22:46:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:32.927 22:46:33 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:36.211 Initializing NVMe Controllers 00:26:36.211 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:36.211 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:36.211 Initialization complete. Launching workers. 00:26:36.211 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 32872, failed: 0 00:26:36.211 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 32872, failed to submit 0 00:26:36.211 success 0, unsuccess 32872, failed 0 00:26:36.211 22:46:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:36.211 22:46:36 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:39.497 Initializing NVMe Controllers 00:26:39.497 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:39.497 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:39.497 Initialization complete. Launching workers. 00:26:39.497 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 79799, failed: 0 00:26:39.497 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 34015, failed to submit 45784 00:26:39.497 success 0, unsuccess 34015, failed 0 00:26:39.497 22:46:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:39.497 22:46:39 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:42.786 Initializing NVMe Controllers 00:26:42.786 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:42.786 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:42.786 Initialization complete. Launching workers. 
00:26:42.786 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 97509, failed: 0 00:26:42.786 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24414, failed to submit 73095 00:26:42.786 success 0, unsuccess 24414, failed 0 00:26:42.786 22:46:43 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:42.786 22:46:43 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:42.786 22:46:43 -- nvmf/common.sh@677 -- # echo 0 00:26:42.786 22:46:43 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:42.786 22:46:43 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:42.786 22:46:43 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:42.786 22:46:43 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:42.786 22:46:43 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:42.786 22:46:43 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:42.786 ************************************ 00:26:42.786 END TEST kernel_target_abort 00:26:42.786 ************************************ 00:26:42.786 00:26:42.786 real 0m10.475s 00:26:42.786 user 0m5.481s 00:26:42.786 sys 0m2.194s 00:26:42.786 22:46:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:42.786 22:46:43 -- common/autotest_common.sh@10 -- # set +x 00:26:42.786 22:46:43 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:42.786 22:46:43 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:42.786 22:46:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:42.786 22:46:43 -- nvmf/common.sh@116 -- # sync 00:26:42.786 22:46:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:42.786 22:46:43 -- nvmf/common.sh@119 -- # set +e 00:26:42.786 22:46:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:42.786 22:46:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:42.786 rmmod nvme_tcp 00:26:42.786 rmmod nvme_fabrics 00:26:42.786 rmmod nvme_keyring 00:26:42.786 22:46:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:42.786 22:46:43 -- nvmf/common.sh@123 -- # set -e 00:26:42.786 22:46:43 -- nvmf/common.sh@124 -- # return 0 00:26:42.786 22:46:43 -- nvmf/common.sh@477 -- # '[' -n 103276 ']' 00:26:42.786 22:46:43 -- nvmf/common.sh@478 -- # killprocess 103276 00:26:42.786 22:46:43 -- common/autotest_common.sh@936 -- # '[' -z 103276 ']' 00:26:42.786 22:46:43 -- common/autotest_common.sh@940 -- # kill -0 103276 00:26:42.786 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103276) - No such process 00:26:42.786 Process with pid 103276 is not found 00:26:42.786 22:46:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103276 is not found' 00:26:42.786 22:46:43 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:42.786 22:46:43 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:43.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:43.352 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:43.352 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:43.352 22:46:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:43.352 22:46:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:43.352 22:46:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:43.352 22:46:44 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:43.352 22:46:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.352 22:46:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:43.352 22:46:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.611 22:46:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:43.611 00:26:43.611 real 0m24.833s 00:26:43.611 user 0m50.095s 00:26:43.611 sys 0m5.615s 00:26:43.611 22:46:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:43.611 22:46:44 -- common/autotest_common.sh@10 -- # set +x 00:26:43.611 ************************************ 00:26:43.611 END TEST nvmf_abort_qd_sizes 00:26:43.611 ************************************ 00:26:43.611 22:46:44 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:43.611 22:46:44 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:26:43.611 22:46:44 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:26:43.611 22:46:44 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:26:43.611 22:46:44 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:26:43.611 22:46:44 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:26:43.611 22:46:44 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:26:43.611 22:46:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:43.611 22:46:44 -- common/autotest_common.sh@10 -- # set +x 00:26:43.611 22:46:44 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:26:43.611 22:46:44 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:26:43.611 22:46:44 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:26:43.611 22:46:44 -- common/autotest_common.sh@10 -- # set +x 00:26:45.516 INFO: APP EXITING 00:26:45.516 INFO: killing all VMs 00:26:45.516 INFO: killing vhost app 00:26:45.516 INFO: EXIT DONE 00:26:46.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:46.343 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:46.343 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:46.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:46.910 Cleaning 00:26:46.910 Removing: /var/run/dpdk/spdk0/config 00:26:47.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:47.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:47.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:47.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:47.169 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:47.169 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:47.169 Removing: /var/run/dpdk/spdk1/config 00:26:47.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:47.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:47.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:47.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:47.169 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:47.169 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:47.169 Removing: /var/run/dpdk/spdk2/config 00:26:47.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:47.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:47.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:47.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:47.169 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:47.169 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:47.169 Removing: /var/run/dpdk/spdk3/config 00:26:47.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:47.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:47.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:47.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:47.169 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:47.169 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:47.169 Removing: /var/run/dpdk/spdk4/config 00:26:47.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:47.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:47.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:47.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:47.169 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:47.169 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:47.169 Removing: /dev/shm/nvmf_trace.0 00:26:47.169 Removing: /dev/shm/spdk_tgt_trace.pid67298 00:26:47.169 Removing: /var/run/dpdk/spdk0 00:26:47.169 Removing: /var/run/dpdk/spdk1 00:26:47.169 Removing: /var/run/dpdk/spdk2 00:26:47.169 Removing: /var/run/dpdk/spdk3 00:26:47.169 Removing: /var/run/dpdk/spdk4 00:26:47.169 Removing: /var/run/dpdk/spdk_pid100250 00:26:47.169 Removing: /var/run/dpdk/spdk_pid100455 00:26:47.169 Removing: /var/run/dpdk/spdk_pid100749 00:26:47.169 Removing: /var/run/dpdk/spdk_pid101056 00:26:47.169 Removing: /var/run/dpdk/spdk_pid101606 00:26:47.169 Removing: /var/run/dpdk/spdk_pid101612 00:26:47.169 Removing: /var/run/dpdk/spdk_pid101979 00:26:47.169 Removing: /var/run/dpdk/spdk_pid102141 00:26:47.169 Removing: /var/run/dpdk/spdk_pid102298 00:26:47.169 Removing: /var/run/dpdk/spdk_pid102389 00:26:47.169 Removing: /var/run/dpdk/spdk_pid102554 00:26:47.169 Removing: /var/run/dpdk/spdk_pid102664 00:26:47.169 Removing: /var/run/dpdk/spdk_pid103345 00:26:47.169 Removing: /var/run/dpdk/spdk_pid103379 00:26:47.169 Removing: /var/run/dpdk/spdk_pid103416 00:26:47.169 Removing: /var/run/dpdk/spdk_pid103661 00:26:47.169 Removing: /var/run/dpdk/spdk_pid103698 00:26:47.169 Removing: /var/run/dpdk/spdk_pid103734 00:26:47.169 Removing: /var/run/dpdk/spdk_pid67135 00:26:47.169 Removing: /var/run/dpdk/spdk_pid67298 00:26:47.169 Removing: /var/run/dpdk/spdk_pid67619 00:26:47.169 Removing: /var/run/dpdk/spdk_pid67894 00:26:47.169 Removing: /var/run/dpdk/spdk_pid68066 00:26:47.169 Removing: /var/run/dpdk/spdk_pid68155 00:26:47.169 Removing: /var/run/dpdk/spdk_pid68254 00:26:47.169 Removing: /var/run/dpdk/spdk_pid68345 00:26:47.169 Removing: /var/run/dpdk/spdk_pid68389 00:26:47.169 Removing: /var/run/dpdk/spdk_pid68419 00:26:47.169 Removing: /var/run/dpdk/spdk_pid68483 00:26:47.169 Removing: /var/run/dpdk/spdk_pid68605 00:26:47.169 Removing: /var/run/dpdk/spdk_pid69237 00:26:47.169 Removing: /var/run/dpdk/spdk_pid69301 00:26:47.169 Removing: /var/run/dpdk/spdk_pid69370 00:26:47.169 Removing: 
/var/run/dpdk/spdk_pid69393 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69477 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69505 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69584 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69612 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69669 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69699 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69745 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69775 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69934 00:26:47.429 Removing: /var/run/dpdk/spdk_pid69964 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70050 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70115 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70145 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70198 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70223 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70252 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70277 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70308 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70328 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70362 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70382 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70416 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70436 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70470 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70490 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70524 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70546 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70581 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70603 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70632 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70657 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70686 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70711 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70740 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70765 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70794 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70819 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70848 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70873 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70902 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70922 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70956 00:26:47.429 Removing: /var/run/dpdk/spdk_pid70976 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71010 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71030 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71064 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71087 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71124 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71147 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71184 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71204 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71238 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71258 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71288 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71365 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71469 00:26:47.429 Removing: /var/run/dpdk/spdk_pid71907 00:26:47.429 Removing: /var/run/dpdk/spdk_pid78888 00:26:47.429 Removing: /var/run/dpdk/spdk_pid79240 00:26:47.429 Removing: /var/run/dpdk/spdk_pid81687 00:26:47.429 Removing: /var/run/dpdk/spdk_pid82071 00:26:47.429 Removing: /var/run/dpdk/spdk_pid82337 00:26:47.429 Removing: /var/run/dpdk/spdk_pid82383 00:26:47.429 Removing: /var/run/dpdk/spdk_pid82694 00:26:47.429 Removing: /var/run/dpdk/spdk_pid82750 00:26:47.429 Removing: /var/run/dpdk/spdk_pid83129 00:26:47.429 Removing: /var/run/dpdk/spdk_pid83665 00:26:47.429 Removing: /var/run/dpdk/spdk_pid84092 00:26:47.429 Removing: /var/run/dpdk/spdk_pid85044 
00:26:47.429 Removing: /var/run/dpdk/spdk_pid86040 00:26:47.429 Removing: /var/run/dpdk/spdk_pid86163 00:26:47.429 Removing: /var/run/dpdk/spdk_pid86227 00:26:47.429 Removing: /var/run/dpdk/spdk_pid87719 00:26:47.429 Removing: /var/run/dpdk/spdk_pid87965 00:26:47.689 Removing: /var/run/dpdk/spdk_pid88398 00:26:47.689 Removing: /var/run/dpdk/spdk_pid88508 00:26:47.689 Removing: /var/run/dpdk/spdk_pid88661 00:26:47.689 Removing: /var/run/dpdk/spdk_pid88708 00:26:47.689 Removing: /var/run/dpdk/spdk_pid88748 00:26:47.689 Removing: /var/run/dpdk/spdk_pid88794 00:26:47.689 Removing: /var/run/dpdk/spdk_pid88957 00:26:47.689 Removing: /var/run/dpdk/spdk_pid89105 00:26:47.689 Removing: /var/run/dpdk/spdk_pid89369 00:26:47.689 Removing: /var/run/dpdk/spdk_pid89487 00:26:47.689 Removing: /var/run/dpdk/spdk_pid89908 00:26:47.689 Removing: /var/run/dpdk/spdk_pid90300 00:26:47.689 Removing: /var/run/dpdk/spdk_pid90303 00:26:47.689 Removing: /var/run/dpdk/spdk_pid92565 00:26:47.689 Removing: /var/run/dpdk/spdk_pid92878 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93389 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93402 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93743 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93764 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93778 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93803 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93814 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93958 00:26:47.689 Removing: /var/run/dpdk/spdk_pid93961 00:26:47.689 Removing: /var/run/dpdk/spdk_pid94064 00:26:47.689 Removing: /var/run/dpdk/spdk_pid94077 00:26:47.689 Removing: /var/run/dpdk/spdk_pid94184 00:26:47.689 Removing: /var/run/dpdk/spdk_pid94187 00:26:47.689 Removing: /var/run/dpdk/spdk_pid94665 00:26:47.689 Removing: /var/run/dpdk/spdk_pid94718 00:26:47.689 Removing: /var/run/dpdk/spdk_pid94865 00:26:47.689 Removing: /var/run/dpdk/spdk_pid94986 00:26:47.689 Removing: /var/run/dpdk/spdk_pid95381 00:26:47.689 Removing: /var/run/dpdk/spdk_pid95638 00:26:47.689 Removing: /var/run/dpdk/spdk_pid96141 00:26:47.689 Removing: /var/run/dpdk/spdk_pid96702 00:26:47.689 Removing: /var/run/dpdk/spdk_pid97174 00:26:47.689 Removing: /var/run/dpdk/spdk_pid97244 00:26:47.689 Removing: /var/run/dpdk/spdk_pid97336 00:26:47.689 Removing: /var/run/dpdk/spdk_pid97408 00:26:47.689 Removing: /var/run/dpdk/spdk_pid97552 00:26:47.689 Removing: /var/run/dpdk/spdk_pid97642 00:26:47.689 Removing: /var/run/dpdk/spdk_pid97727 00:26:47.689 Removing: /var/run/dpdk/spdk_pid97823 00:26:47.689 Removing: /var/run/dpdk/spdk_pid98179 00:26:47.689 Removing: /var/run/dpdk/spdk_pid98893 00:26:47.689 Clean 00:26:47.689 killing process with pid 61523 00:26:47.948 killing process with pid 61524 00:26:47.948 22:46:48 -- common/autotest_common.sh@1446 -- # return 0 00:26:47.948 22:46:48 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:26:47.948 22:46:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.948 22:46:48 -- common/autotest_common.sh@10 -- # set +x 00:26:47.948 22:46:48 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:26:47.948 22:46:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.948 22:46:48 -- common/autotest_common.sh@10 -- # set +x 00:26:47.948 22:46:48 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:47.948 22:46:48 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:47.948 22:46:48 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:47.948 22:46:48 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:26:47.948 22:46:48 -- spdk/autotest.sh@383 -- # hostname 00:26:47.948 22:46:48 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:48.211 geninfo: WARNING: invalid characters removed from testname! 00:27:10.186 22:47:07 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:10.445 22:47:10 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:12.348 22:47:13 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:14.885 22:47:15 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:16.792 22:47:17 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:18.695 22:47:19 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:21.228 22:47:21 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:21.228 22:47:21 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:21.228 22:47:21 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:21.228 22:47:21 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:21.228 22:47:21 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:21.228 22:47:21 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:21.228 22:47:21 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
00:27:21.228 22:47:21 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:21.228 22:47:21 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:21.228 22:47:21 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:21.228 22:47:21 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:21.228 22:47:21 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:21.228 22:47:21 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:21.228 22:47:21 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:21.228 22:47:21 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:21.228 22:47:21 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:21.228 22:47:21 -- scripts/common.sh@343 -- $ case "$op" in 00:27:21.228 22:47:21 -- scripts/common.sh@344 -- $ : 1 00:27:21.228 22:47:21 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:21.228 22:47:21 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:21.228 22:47:21 -- scripts/common.sh@364 -- $ decimal 1 00:27:21.228 22:47:21 -- scripts/common.sh@352 -- $ local d=1 00:27:21.228 22:47:21 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:21.228 22:47:21 -- scripts/common.sh@354 -- $ echo 1 00:27:21.228 22:47:21 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:21.228 22:47:21 -- scripts/common.sh@365 -- $ decimal 2 00:27:21.228 22:47:21 -- scripts/common.sh@352 -- $ local d=2 00:27:21.228 22:47:21 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:21.228 22:47:21 -- scripts/common.sh@354 -- $ echo 2 00:27:21.228 22:47:21 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:21.228 22:47:21 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:21.228 22:47:21 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:21.228 22:47:21 -- scripts/common.sh@367 -- $ return 0 00:27:21.228 22:47:21 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.228 22:47:21 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:21.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.228 --rc genhtml_branch_coverage=1 00:27:21.228 --rc genhtml_function_coverage=1 00:27:21.228 --rc genhtml_legend=1 00:27:21.228 --rc geninfo_all_blocks=1 00:27:21.228 --rc geninfo_unexecuted_blocks=1 00:27:21.228 00:27:21.228 ' 00:27:21.228 22:47:21 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:21.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.228 --rc genhtml_branch_coverage=1 00:27:21.228 --rc genhtml_function_coverage=1 00:27:21.228 --rc genhtml_legend=1 00:27:21.228 --rc geninfo_all_blocks=1 00:27:21.228 --rc geninfo_unexecuted_blocks=1 00:27:21.228 00:27:21.228 ' 00:27:21.228 22:47:21 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:21.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.229 --rc genhtml_branch_coverage=1 00:27:21.229 --rc genhtml_function_coverage=1 00:27:21.229 --rc genhtml_legend=1 00:27:21.229 --rc geninfo_all_blocks=1 00:27:21.229 --rc geninfo_unexecuted_blocks=1 00:27:21.229 00:27:21.229 ' 00:27:21.229 22:47:21 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.229 --rc genhtml_branch_coverage=1 00:27:21.229 --rc genhtml_function_coverage=1 00:27:21.229 --rc genhtml_legend=1 00:27:21.229 --rc geninfo_all_blocks=1 00:27:21.229 --rc geninfo_unexecuted_blocks=1 00:27:21.229 00:27:21.229 ' 00:27:21.229 22:47:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:21.229 22:47:21 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:21.229 22:47:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.229 22:47:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.229 22:47:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.229 22:47:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.229 22:47:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.229 22:47:21 -- paths/export.sh@5 -- $ export PATH 00:27:21.229 22:47:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.229 22:47:21 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:21.229 22:47:21 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:21.229 22:47:21 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732142841.XXXXXX 00:27:21.229 22:47:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732142841.ivxLDW 00:27:21.229 22:47:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:21.229 22:47:21 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:27:21.229 22:47:21 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:21.229 22:47:21 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:21.229 22:47:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:21.229 22:47:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:21.229 22:47:21 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:21.229 22:47:21 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:21.229 22:47:21 -- common/autotest_common.sh@10 -- $ set +x 00:27:21.229 22:47:21 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:21.229 22:47:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:21.229 22:47:21 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:21.229 22:47:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:21.229 22:47:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:21.229 22:47:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:21.229 22:47:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:21.229 22:47:21 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:21.229 22:47:21 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:21.229 22:47:21 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:21.229 22:47:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:21.229 + [[ -n 5964 ]] 00:27:21.229 + sudo kill 5964 00:27:21.238 [Pipeline] } 00:27:21.253 [Pipeline] // timeout 00:27:21.258 [Pipeline] } 00:27:21.274 [Pipeline] // stage 00:27:21.279 [Pipeline] } 00:27:21.293 [Pipeline] // catchError 00:27:21.303 [Pipeline] stage 00:27:21.305 [Pipeline] { (Stop VM) 00:27:21.317 [Pipeline] sh 00:27:21.599 + vagrant halt 00:27:24.889 ==> default: Halting domain... 00:27:31.474 [Pipeline] sh 00:27:31.752 + vagrant destroy -f 00:27:34.284 ==> default: Removing domain... 00:27:34.297 [Pipeline] sh 00:27:34.580 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:34.589 [Pipeline] } 00:27:34.603 [Pipeline] // stage 00:27:34.608 [Pipeline] } 00:27:34.623 [Pipeline] // dir 00:27:34.628 [Pipeline] } 00:27:34.639 [Pipeline] // wrap 00:27:34.643 [Pipeline] } 00:27:34.652 [Pipeline] // catchError 00:27:34.660 [Pipeline] stage 00:27:34.661 [Pipeline] { (Epilogue) 00:27:34.671 [Pipeline] sh 00:27:34.948 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:39.176 [Pipeline] catchError 00:27:39.178 [Pipeline] { 00:27:39.190 [Pipeline] sh 00:27:39.470 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:39.729 Artifacts sizes are good 00:27:39.738 [Pipeline] } 00:27:39.752 [Pipeline] // catchError 00:27:39.763 [Pipeline] archiveArtifacts 00:27:39.770 Archiving artifacts 00:27:39.886 [Pipeline] cleanWs 00:27:39.897 [WS-CLEANUP] Deleting project workspace... 00:27:39.897 [WS-CLEANUP] Deferred wipeout is used... 00:27:39.903 [WS-CLEANUP] done 00:27:39.905 [Pipeline] } 00:27:39.921 [Pipeline] // stage 00:27:39.926 [Pipeline] } 00:27:39.949 [Pipeline] // node 00:27:39.954 [Pipeline] End of Pipeline 00:27:39.995 Finished: SUCCESS